Discussion:
[Sks-devel] New Keyservers and Dumps
Eric Germann
2018-08-20 13:26:23 UTC
Permalink
I’ve reworked the keyserver fleet we’d previously deployed and made a blog post [1] about it. If you’d peered with me before, those peerings have most likely been cleaned out as I diversified the fleet across different cities and rebuilt the servers. They are TLS-enabled, but just with a standard cert, not an SKS-signed cert. PGP on a Mac seems to work fine. I’d be curious to hear reports from other clients about hkps issues or successes.

I’m also providing nightly dumps of the PGP database blogged about here [2].

Any questions or peering requests can be sent to ***@semperen.com (PGP ID 0x55D89385152D11CD3B930C39495C22B395C821E4)

Thanks

EKG

[1] https://7layers.semperen.com/content/pgp-keyservers-available-production
[2] https://7layers.semperen.com/content/pgp-keyserver-dumps-now-available
Kristian Fiskerstrand
2018-08-23 13:49:40 UTC
Permalink
Post by Eric Germann
I’ve reworked the keyserver fleet we’d previously deployed and made a blog post [1] about it.
Are the servers clustered in any way? In my experience each site needs
at least three nodes to ensure proper operation (mainly so that if A and B
are gossiping, C can still respond to requests; depending on the amount of
traffic and the speed of the nodes, more is better).

So a clustered setup is more important than a large number of individual
servers, as there is no retry functionality in dirmngr.

I'm still looking for more clustered setups to include in the hkps pool,
in particular since noticing an interesting quirk when only one server
is included: it disables pool behavior in dirmngr and results in a TLS
error / generic error because the CA pem is not loaded...
--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"We all die. The goal isn't to live forever, the goal is to create
something that will."
(Chuck Palahniuk)
Eric Germann
2018-08-23 21:49:54 UTC
Permalink
Since I’ve been rolling these myself, I didn’t know a three-node cluster was best.

As for the three: whether putting them behind a load balancer or doing round-robin, how would the LB or the client know there was a failure on one node and move on to another in the cluster? Most setups I’ve seen with multiple boxes use two IPs behind a CNAME doing round-robin DNS.

FWIW, no one has complained, so I’m not too sure it’s an issue, at least for now.

I do notice I frequently end up with a significant number of them in the hkp pool. They do run hkps on Let’s Encrypt certs and seem to sync fine, at least with GPG Suite.

Do you have a best-practices deployment doc? Because it’s pretty much been trial by fire. For example, killing the daemon gives you about a 50% chance of blowing up the db. For the longest time I rebuilt, not knowing that an “sks cleandb” would fix it 99% of the time.
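
The recovery described here can be sketched as follows; the service name and basedir are assumptions that vary by distribution:

```shell
# Hypothetical recovery after an unclean shutdown.
systemctl stop sks     # make sure no sks process still holds the BDB environment
cd /var/lib/sks        # the sks basedir (distribution-dependent)
sks cleandb            # clears stale Berkeley DB state; fixes most "blown up" DBs
systemctl start sks
```

If cleandb does not help, rebuilding from a keydump remains the fallback.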

Docs seem a bit thin. I was trying to up the pool count since a lot of servers seem to have gone by the wayside, adding some geo-diversity and running one in Africa. Not sure if there are any others down there.

It’s an interesting experiment. If it’s an issue, let me know and I will shut some or all of it down.

EKG
Kristian Fiskerstrand
2018-08-24 12:55:45 UTC
Permalink
Post by Eric Germann
Since I’ve been rolling these myself, I didn’t know a three-node cluster was best.
As for the three: whether putting them behind a load balancer or doing round-robin, how would the LB or the client know there was a failure on one node and move on to another in the cluster? Most setups I’ve seen with multiple boxes use two IPs behind a CNAME doing round-robin DNS.
it hops to another server after a timeout or due to a 5xx message from
upstream, e.g. (nginx):

upstream sks_servers {
    server 192.168.0.55:11372 weight=5;
    server 192.168.0.61:11371 weight=10;
    server 192.168.0.36:11371 weight=10;
}

Adding a cache on the LB further improves responses, as discussed previously.
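
The caching mentioned here can be sketched in nginx like this; the cache path, zone name, and server_name are hypothetical:

```nginx
# Hypothetical front-end cache for the sks_servers upstream above.
proxy_cache_path /var/cache/nginx/sks levels=1:2 keys_zone=sks_cache:10m
                 max_size=256m inactive=10m;

server {
    listen 11371;
    server_name keys.example.net;

    location / {
        proxy_pass http://sks_servers;
        proxy_cache sks_cache;
        proxy_cache_valid 200 10m;   # cache successful lookups for 10 minutes
    }
}
```

This way repeated lookups of popular keys are answered from the LB without hitting an sks instance.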
Post by Eric Germann
FWIW, no one has complained, so I’m not too sure it’s an issue, at least for now.
I get all the complaints, as people say the pool isn't working.
Post by Eric Germann
I do notice I frequently end up with a significant number of them in the hkp pool. They do run hkps on Let’s Encrypt certs and seem to sync fine, at least with GPG Suite.
Most traffic goes to the hkps pool these days anyway, since that is the
default in gnupg.
Post by Eric Germann
Do you have a best-practices deployment doc? Because it’s pretty much been trial by fire. For example, killing the daemon gives you about a 50% chance of blowing up the db. For the longest time I rebuilt, not knowing that an “sks cleandb” would fix it 99% of the time.
There are very few scenarios where you would kill the daemon, though. The
archive of the ML has many discussions, and you also have
https://bitbucket.org/skskeyserver/sks-keyserver/wiki/Peering giving
good pointers.
Post by Eric Germann
Docs seem a bit thin. I was trying to up the pool count since a lot of servers seem to have gone by the wayside, adding some geo-diversity and running one in Africa. Not sure if there are any others down there.
It’s an interesting experiment. If it’s an issue, let me know and I will shut some or all of it down.
It's not an issue, but in practice it doesn't necessarily add much value
either; more clustered setups are more important for the ecosystem than
even more individual servers.
--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"The laws of Australia prevail in Australia, I can assure you of that.
The laws of mathematics are very commendable, but the only laws that
applies in Australia is the law of Australia."
(Malcolm Turnbull, Prime Minister of Australia).
Gabor Kiss
2018-08-24 09:36:12 UTC
Permalink
Post by Kristian Fiskerstrand
Are the servers clustered in any way? In my experience each site needs
at least three nodes to ensure proper operation (mainly so that if A and B
are gossiping, C can still respond to requests; depending on the amount of
traffic and the speed of the nodes, more is better).
So a clustered setup is more important than a large number of individual
servers, as there is no retry functionality in dirmngr.
A question:
Does an SKS cluster need separate storage for each node,
or can the nodes share the database?

Cheers

Gabor
--
A mug of beer, please. Shaken, not stirred.
Michael Jones
2018-08-24 10:46:55 UTC
Permalink
I've set up my cluster with separate filesystems, as I believe locks are
created on the BDB, so the sks instances would lock each other if they
shared storage; otherwise I would have used NFS or Gluster.

Kind Regards,
Mike
Kristian Fiskerstrand
2018-08-24 12:36:20 UTC
Permalink
Post by Gabor Kiss
Does an SKS cluster need separate storage for each node,
or can the nodes share the database?
The DB/storage needs to be separate, but it doesn't require multiple VMs,
although I tend to just spin up a new one for each node.
--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
"My father used to say: ‘Don’t raise your voice, improve your argument.’"
(Desmond Tutu)
Kiss Gabor (Bitman)
2018-08-24 13:05:53 UTC
Permalink
Post by Kristian Fiskerstrand
Post by Gabor Kiss
Does an SKS cluster need separate storage for each node,
or can the nodes share the database?
the DB/storage needs to be separate, but it doesn't require multiple VMs
Unfortunately, disk space is the bottleneck on my side.
However, I will consult my colleagues.

Thanks.

Gabor
Alain Wolf
2018-08-26 16:44:42 UTC
Permalink
Hi
Post by Kristian Fiskerstrand
Post by Gabor Kiss
Does an SKS cluster need separate storage for each node,
or can the nodes share the database?
the DB/storage needs to be separate, but it doesn't require multiple VMs
although I tend to just spin up a new one for each node.
So to clarify: I run Ubuntu Server 18.04 and, assuming I have 100+ GB
of free disk space:

1) I make two additional copies of /var/lib/sks (22GB as of today).

2) I give them each a nodename in sksconf, but leave the hostname as
it is.

3) I peer all of them with each other in their membership files.

4) I somehow convince systemd to run three instances of sks and
sks-recon, each with its own working-dir.

5) I tell my Nginx to proxy all three of them.

6) I ask around for peers to my two new instances.
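
Step 4 above could be handled with a systemd template unit; a minimal sketch, assuming Debian-style paths, a debian-sks user, and per-node basedirs like /var/lib/sks-node1 (all hypothetical):

```ini
# /etc/systemd/system/sks-db@.service -- hypothetical template unit;
# enable as sks-db@node1, sks-db@node2, sks-db@node3
[Unit]
Description=SKS key server database (%i)
After=network.target

[Service]
User=debian-sks
ExecStart=/usr/sbin/sks db -basedir /var/lib/sks-%i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

A matching sks-recon@.service would run "sks recon -basedir /var/lib/sks-%i" the same way; each instance's sksconf then sets distinct hkp/recon ports.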


A) Is that it?

B) Would this be useful?


Note 1:
I have only one single external IPv4 address, but a delegated IPv6 prefix.
So IPv4 recon will be limited to one of the three instances.

Note 2:
My server is not in the HKPS pool, and probably will not be in the
foreseeable future.


P.S.

Also, if this is so important, I suggest a description in the SKS Wiki,
similar to what we have for Peering and DumpingKeys.

I also find it a bit confusing that the sks website talks about
load-balancing while this thread talks about clustering.


Regards
Alain
--
pgpkeys.urown.net 11370 # <***@urown.net> 0x27A69FC9A1744242
Kristian Fiskerstrand
2018-08-27 07:18:56 UTC
Permalink
[Sent from my iPad; as it is not a secured device there are no cryptographic keys on it, meaning this message is sent without an OpenPGP signature. In general you should *not* rely on any information sent over such an insecure channel; if you find any information controversial or unexpected, send a response and request a signed confirmation.]
Post by Alain Wolf
Hi
Post by Kristian Fiskerstrand
Post by Gabor Kiss
Does an SKS cluster need separate storage for each node,
or can the nodes share the database?
the DB/storage needs to be separate, but it doesn't require multiple VMs
although I tend to just spin up a new one for each node.
So to clarify: I run Ubuntu Server 18.04 and, assuming I have 100+ GB of free disk space:
1) I make two additional copies of /var/lib/sks (22GB as of today).
2) I give them each a nodename in sksconf, but leave the hostname as
it is.
Right... obviously the ports also need to be distinct.
Post by Alain Wolf
3) I peer all of them with each other in their membership files.
4) I somehow convince systemd to run three instances of sks and
sks-recon, each with its own working-dir.
5) I tell my Nginx to proxy all three of them.
6) I ask around for peers to my two new instances.
A) Is that it?
Yup... that is pretty much it. I also recommend a 10-minute cache on the load balancer.
Post by Alain Wolf
B) Would this be useful?
Very much so... that should be much more reliable.
Post by Alain Wolf
I have only one single external IPv4 address, but a delegated IPv6 prefix.
So IPv4 recon will be limited to one of the three instances.
That is what I use myself... one primary doing the external gossiping, and each slave gossiping only with the master. One reason for this is that you don’t want slaves gossiping with others, as that reduces the time they are available to respond, and you always want at least one node responding.
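
The hub-and-spoke topology described here maps directly onto the sks membership files; a sketch with hypothetical hostnames:

```
# membership on the primary (node1): internal slaves plus external peers
node2.keys.example.net 11370
node3.keys.example.net 11370
keyserver.example.org 11370

# membership on each slave (node2, node3): only the primary
node1.keys.example.net 11370
```

Only the primary's recon port needs to be reachable from outside; the slaves reconcile with it over the internal network, so they stay free to answer hkp requests.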
Fabian A. Santiago
2018-08-27 12:43:08 UTC
Permalink
Really... you view an SKS cluster as nothing more than multiple
instances running on one server? Interesting... would there be any
advantage to using multiple servers / VMs? Or would that then be
overkill?
--
Fabian S.

OpenPGP:

0x643082042DC83E6D94B86C405E3DAA18A1C22D8F (new key)

***

0x3C3FA072ACCB7AC5DB0F723455502B0EEB9070FC (to be retired, still valid)
Kristian Fiskerstrand
2018-08-28 13:31:23 UTC
Permalink
Post by Fabian A. Santiago
Really... you view an SKS cluster as nothing more than multiple
instances running on one server? Interesting... would there be any
advantage to using multiple servers / VMs? Or would that then be overkill?
There would be the usual advantages if there are other outages, e.g.
during a system upgrade, but for the purposes we're talking about it
just needs to be multiple instances.
--
----------------------------
Kristian Fiskerstrand
Blog: https://blog.sumptuouscapital.com
Twitter: @krifisk
----------------------------
Public OpenPGP keyblock at hkp://pool.sks-keyservers.net
fpr:94CB AFDD 3034 5109 5618 35AA 0B7F 8B60 E3ED FAE3
----------------------------
Potius sero quam numquam
Better late than never