Post by Eric GermannSince I've been rolling these myself, I didn't know a 3-node cluster was best.
As for the 3, whether putting them behind an LB or doing round-robin, how would the LB or the client know there was a failure on one and move on in the cluster? Most I've seen with multiple (??) boxes use two IPs behind a CNAME doing RR DNS.
It hops to another server after a timeout or a 5xx response from
upstream, e.g. (nginx):
upstream sks_servers
{
    server 192.168.0.55:11372 weight=5;
    server 192.168.0.61:11371 weight=10;
    server 192.168.0.36:11371 weight=10;
}
Adding a cache on the LB further improves responses, as discussed previously.
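A minimal sketch of what the LB side could look like, assuming the upstream block above; the hostname and cache path (keys.example.net, /var/cache/nginx/sks) are hypothetical and the cache settings are illustrative only:

proxy_cache_path /var/cache/nginx/sks levels=1:2 keys_zone=sks_cache:10m inactive=10m;

server
{
    listen 11371;
    server_name keys.example.net;   # hypothetical front-end name

    location /
    {
        proxy_pass http://sks_servers;
        # hop to the next upstream on timeout or 5xx, as described above
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        # short-lived cache so repeated lookups are answered locally
        proxy_cache sks_cache;
        proxy_cache_valid 200 5m;
    }
}

proxy_next_upstream is what gives the hop-on-failure behaviour, while a short proxy_cache_valid window keeps responses fast without delaying key updates for long.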
Post by Eric GermannFWIW, no one has complained, so not too sure it's an issue, at least for now.
I get all the complaints, as people say the pool isn't working.
Post by Eric GermannI do notice I frequently end up with a significant number of them in the hkp pool. They do run hkps on LetsEncrypt certs and seem to sync fine, at least to GPGSuite.
Most traffic goes to the hkps pool these days anyway, since that is the
default in gnupg.
Post by Eric GermannDo you have a best-practices deployment doc, because it's pretty much been trial by fire. For example, killing the daemon gives you about a 50% chance of blowing up the db. For the longest time I rebuilt, not knowing an 'sks cleandb' would fix it 99% of the time.
There are very few scenarios where you would need to kill the daemon,
though. The archive of the ML has many discussions, and
https://bitbucket.org/skskeyserver/sks-keyserver/wiki/Peering gives
good pointers.
Post by Eric GermannDocs seem a bit thin. I was trying to up pool count since a lot seem to have gone by the wayside, adding some geo-diversity and running one in Africa. Not sure if there are any others down there.
It's an interesting experiment. If it's an issue let me know and I will shut some/it down.
It's not an issue, but in practice it doesn't necessarily add much value
either; more clustered setups are more important for the ecosystem than
yet more individual servers.
Post by Eric GermannEKG
Post by Kristian FiskerstrandPost by Eric GermannI've reworked the keyserver fleet we'd previously deployed and made a blog post [1] about it.
Are the servers clustered in any way? In my experience each site needs
at least 3 nodes to ensure proper operation (mainly so that if A and B
are gossiping, C can still respond to requests; depending on the amount
of traffic and the speed of the nodes, more is better).
So a clustered setup is more important than a large number of individual
servers, as there is no retry functionality in dirmngr.
I'm still looking for more clustered setups to include in the hkps pool,
in particular since noticing an interesting quirk: if only one server
is included, pool behavior is disabled in dirmngr, which results in a TLS
error / generic error because the CA pem is not loaded...
--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"The laws of Australia prevail in Australia, I can assure you of that.
The laws of mathematics are very commendable, but the only laws that
applies in Australia is the law of Australia."
(Malcolm Turnbull, Prime Minister of Australia).