Discussion:
[Sks-devel] SKS RAM usage gone haywire
Phil Pennock
2009-02-14 03:21:42 UTC
Permalink
Folks,

Normally, sks consumes a negligible amount of the resources on my
server; some GB of disk, some MB RAM (~45 right now, after restart), not
enough network that I've bothered to isolate figures for it.

I just found that my box was thrashing badly because sks had jumped to
using 3GB RAM. On a 2GB box + swap, this was unhealthy.

I'm not seeing anything stand out in db.log or recon.log; has anyone
else seen this behaviour? Any ideas of causes or spoor to look for in
the logs?

Thanks,
-Phil
Daniel Kahn Gillmor
2009-02-14 21:50:48 UTC
Permalink
Post by Phil Pennock
Normally, sks consumes a negligible amount of the resources on my
server; some GB of disk, some MB RAM (~45 right now, after restart), not
enough network that I've bothered to isolate figures for it.
I just found that my box was thrashing badly because sks had jumped to
using 3GB RAM. On a 2GB box + swap, this was unhealthy.
I'm not seeing anything stand out in db.log or recon.log; has anyone
else seen this behaviour? Any ideas of causes or spoor to look for in
the logs?
Yikes! Thanks for pointing that out, because you made me check up on a
keyserver i'm responsible for. It looks like zimmermann.mayfirst.org
is doing the same thing. The sks recon process in particular now has an
RSS of 3.3g.

Sending the process a SIGHUP is not sufficient to make it clean up any
memory either.

After a restart, the recon process only consumes ~5MB of RAM.

I'll be placing sks recon under some stricter memory limits on
zimmermann now, which might mean that it crashes because of hitting
those limits. I suspect there's a memory leak which could be worth
tracking down.
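
A minimal sketch of the kind of limit i mean, assuming the daemon is
started from a shell wrapper (the cap below is an arbitrary figure, not
a recommendation):

  ulimit -v 1048576   # cap the address space at ~1GB (value is in kB)
  sks recon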

--dkg
Phil Pennock
2009-02-14 22:24:54 UTC
Permalink
Post by Daniel Kahn Gillmor
Yikes! Thanks for pointing that out, because you made me check up on a
keyserver i'm responsible for. It looks like zimmermann.mayfirst.org
is doing the same thing. the sks recon process in particular now has an
RSS of 3.3g.
:-(

That's two servers doing the same thing at the same time, so is there
causation behind the correlation?

Is anyone else seeing this? And running with enough RAM in the box that
you can afford to investigate?

So, the options behind it, if the correlation is more than coincidence,
seem to be:
1 bad SKS update
2 bad query hitting pool.sks-keyservers.net
3 someone hitting the servers individually deliberately
4 someone doing full sync of a keyserver without an initial DB behind it
5 sudden really *really* heavy query load

I don't have enough experience with SKS to know how badly case 4 would
hit other keyservers.

I'd check the www.sks-keyservers.net website for signs of that, but the
DNS servers at ns1.kfwebs.net and ns2.kfwebs.net have gone lame -- one
is down, the other returning SERVFAIL. So, there's no DNS for
sks-keyservers.net. Thus the explicit CC to Kristian.

This means pool.sks-keyservers.net is also currently a dead name.

My suspicion is aroused by this entry just before the restart:

2009-02-14 03:14:07 Page not found: /var/sks/web/robots.txt

A misbehaving web-spider crawling the entire PGP data-set following
links across signing uids, despite the ? URI query and the non-standard
port, would be ... bad.

I'm about to deploy a robots.txt in my webroots for both port 80 and
11371. Something like:

User-agent: *
Disallow: /pks/

should do it.

-Phil
Daniel Kahn Gillmor
2009-02-14 22:45:08 UTC
Permalink
Post by Phil Pennock
So, the options behind it, if the correlation is more than coincidence,
1 bad SKS update
2 bad query hitting pool.sks-keyservers.net
3 someone hitting the servers individually deliberately
4 someone doing full sync of a keyserver without an initial DB behind it
5 sudden really *really* heavy query load
I think you're missing another possibility:

6 sks recon might leak memory during regular operation

Just observing the pattern of memory usage since i did the sks restart
on zimmermann, it seems to be steadily climbing. An hour after sks
recon had an RSS of < 5000KB, it is now at 7840K.

It was at 3GB when i restarted, but the process had been running without
interruption since 2008.

Maybe someone who knows the source and/or is proficient with the use of
valgrind could assess whether sks recon is actually leaky?

FWIW, zimmermann is running sks 1.1.0-4 from amd64 debian GNU/Linux (lenny).

--dkg
Phil Pennock
2009-02-14 23:14:08 UTC
Permalink
Post by Daniel Kahn Gillmor
Maybe someone who knows the source and/or is proficient with the use of
valgrind could assess whether sks recon is actually leaky?
I had been running without *noticing* any increase for some time and am
inclined to believe that it's a change in observed behaviour.

I saw recon size go to 3GB again, but the RSS was only 11MB, so not so
painful. Thus I'm inclined to think that most of this is DB backing
(/pending/sks/PTree/ptree mmap'ing) and therefore mostly not paged in
and harmless. So, what has changed the working set?

In trying to visit my peers' stats pages, one has no data (recent DB
restart) and one has ... 25503 keys. However, I added that peer in
November, shortly after I set up my own server. So unless bazon.ru
only recently lost its keys, that looks less likely.

I begin to wonder if recon is sub-optimal with a large delta of keys to
send and also to wonder if I should bump "learn to read OCaml" up my
priority list -- I'm managing to navigate the sks source faster already,
but I'm still mostly in the dark.

I'm fairly sure that the only other recentish change in my setup is
innocent: I set up db_recover to run weekly, on Saturdays. Since I
didn't set $PATH to include the tools, the automatic runs wouldn't have
worked until I fixed that today, so it has only actually run once, when
I wrote the wrapper script. I restarted the DB server shortly after
that anyway, because I'd played with sks dump before discovering that
it can't be done online.
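
For anyone curious, a minimal sketch of such a wrapper might look like
this (paths and layout here are illustrative, not necessarily mine):

  #!/bin/sh
  # weekly Berkeley DB recovery; only safe to run while sks is stopped
  PATH=/usr/local/bin:/usr/bin:/bin; export PATH
  cd /var/sks/KDB && db_recover -h .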

-Phil
Phil Pennock
2009-02-15 01:59:38 UTC
Permalink
Post by Phil Pennock
I saw recon size go to 3GB again, but the RSS was only 11MB, so not so
1.6GB used for recon; the switch MRTG graphs show no increased traffic.

Got to love those clean shutdown semantics which mean that the process
has to page in so that it can quit cleanly. (Had SIGSTOP'd it for
investigation, letting it page partly out).

Since I had rotated the logs, it's interesting to see this:

2009-02-14 23:12:21 Opening log
2009-02-14 23:12:21 sks_recon, SKS version 1.1.0
2009-02-14 23:12:21 Copyright Yaron Minsky 2002-2003
2009-02-14 23:12:21 Licensed under GPL. See COPYING file for details
2009-02-14 23:12:21 Opening PTree database
2009-02-14 23:12:21 Setting up PTree data structure
2009-02-14 23:12:21 PTree setup complete
2009-02-14 23:12:52 Malformed entry
2009-02-14 23:12:52 Malformed entry
2009-02-14 23:12:52 Malformed entry
2009-02-14 23:12:52 Malformed entry
2009-02-14 23:12:52 Malformed entry

but not spectacularly useful.

I see missing keys fetched, hashes successfully recovered; I see
timeouts connecting to remote hosts and connection refused -- I wonder
how many of the former are the remote side thrashing memory so that
userland is unresponsive?

I see also quite a number of these scattered through the recon.log:
2009-02-14 23:26:00 Reconciliation failed. Returning elements returned so far: End_of_file

but nothing identifying which peer it is.

What I do have is information on which peers I have successfully
received keys from, which will help eliminate some.

Hrm, I have quite a few of these:
2009-02-15 00:25:46 Requesting 19 missing keys from <ADDR_INET 208.72.157.55:11371>, starting with 1825B0B0A4E23B7551F06DF13F72C597
2009-02-15 00:25:46 Error getting missing keys: End_of_file

and I know Ryan just rebuilt keys.nayr.net and it's the latest change to
my configs, so out it goes temporarily for purposes of debugging.
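
For anyone following along: dropping a peer just means removing its
line from the membership file (and, to be safe, restarting recon). The
file is one "hostname recon-port" pair per line, e.g.:

  keyserver.example.org 11370
  another.example.net 11370

with the keys.nayr.net entry now absent on my side.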

The closest I can come to a confirmation is to wait a day and see if it
goes screwy again. :/

-Phil
Ari Trachtenberg
2009-02-15 00:14:41 UTC
Permalink
Yaron Minsky
2009-02-15 15:05:19 UTC
Permalink
Ari is right: there's nothing inherent about the algorithm that should
require an ever-growing use of memory. OCaml itself is very careful about
reclaiming unreferenced memory, but that of course does not preclude a
memory leak in the code.

So far, I have no real clue as to what is going wrong. I could imagine that
the caching at some level is overly aggressive. There are a number of
configuration variables that control how much caching there is. Some of
these are explicit caching numbers that are used by the actual DB, and some
of it is caching that the prefix-tree data structure does on its own. For
instance, there is a bound (which defaults to 1000) on the number of
in-memory nodes of the prefix-tree.
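
On the DB side, one knob worth experimenting with is the Berkeley DB
cache size, which can be pinned with a DB_CONFIG file dropped into the
database environment directory (the figure below is just an example,
not a recommendation):

  # DB_CONFIG: cap the BDB cache at 64MB (args are gbytes, bytes, ncache)
  set_cachesize 0 67108864 1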

The idea that some weird query or a server in an unusual state is exercising
some bug that blows up the memory utilization seems possible as well.

Has anyone confirmed whether it's the db or recon process that is blowing
up in memory? That would help figure out what's going on. For instance,
it's pretty unlikely that a query from a web-crawler would cause the recon
process to explode in size.

y

On Sat, Feb 14, 2009 at 7:14 PM, Ari Trachtenberg <***@bu.edu> wrote:
Phil Pennock
2009-02-15 23:00:35 UTC
Permalink
Post by Yaron Minsky
Ari is right: there's nothing inherent about the algorithm that should
require an ever-growing use of memory. OCaml itself is very careful about
reclaiming unreferenced memory, but that of course does not preclude a
memory leak in the code.
So far, I have no real clue as to what is going wrong. I could imagine that
the caching at some level is overly aggressive. There are a number of
configuration variables that control how much caching there is. Some of
these are explicit caching numbers that are used by the actual DB, and some
if it is caching that the prefix-tree datastructure does on its own. For
instance, there is a bound (defaulted to 1000) on the number of in-memory
nodes of the prefix-tree.
The idea that some weird query or a server in an unusual state is exercising
some bug that blows up the memory utilization seems possible as well.
Has anyone confirmed if it's the db or recon process that is blowing up in
memory? That would help figure out what's going on. For instance, it's
pretty unlikely that a query from a web-crawler would cause the recon
process to explode in size.
It's recon, and the problem has stopped since I took keys.nayr.net out
of my config and that was the most recent change before things went
ballistic.

Ryan, sorry to name you and point fingers publicly while still
investigating, but since at least one other person has seen the same
failure, warning trumps politeness. :(

-Phil
Ryan
2009-02-15 23:33:13 UTC
Permalink
Yeah, I am pretty sure I am the one going haywire... sorry guys, trying
to figure out exactly what the hell is wrong...

It looks like I was going through my peers and requesting the same set
of keys in an endless loop... We can all agree that's bad behavior;
however, my poor peers should never have leaked memory like that, as my
server was cycling the requests through all the peers with about 120
seconds between each request. It's not like I was doing a DoS attack; I
never even noticed any excess CPU or memory usage on my end.

I am currently back down, rebuilding the database from scratch...
hopefully this will resolve the issue.

-Ryan
Post by Phil Pennock
Post by Yaron Minsky
Ari is right: there's nothing inherent about the algorithm that should
require an ever-growing use of memory. OCaml itself is very
careful about
reclaiming unreferenced memory, but that of course does not
preclude a
memory leak in the code.
So far, I have no real clue as to what is going wrong. I could imagine that
the caching at some level is overly aggressive. There are a number of
configuration variables that control how much caching there is.
Some of
these are explicit caching numbers that are used by the actual DB, and some
if it is caching that the prefix-tree datastructure does on its own. For
instance, there is a bound (defaulted to 1000) on the number of in-
memory
nodes of the prefix-tree.
The idea that some weird query or a server in an unusual state is exercising
some bug that blows up the memory utilization seems possible as well.
Has anyone confirmed if it's the db or recon process that is
blowing up in
memory? That would help figure out what's going on. For instance, it's
pretty unlikely that a query from a web-crawler would cause the recon
process to explode in size.
It's recon, and the problem has stopped since I took keys.nayr.net out
of my config and that was the most recent change before things went
ballistic.
Ryan, sorry to name you and point figures publicly while still
investigating, but since at least one other person has seen the same
failure, warning trumps politeness. :(
-Phil
Daniel Kahn Gillmor
2009-02-16 00:19:33 UTC
Permalink
Yea, I am pretty sure I am the one going haywire.. sorry guys, trying to
figure out exactly what the hell is wrong...
Thanks for the quick and excellent detective work, Phil and Ryan! Since
i restarted sks on zimmermann.mayfirst.org, and Ryan took his system
offline, zimmermann.mayfirst.org's sks recon process is now fluctuating
in RAM usage, but holding well below 20MB (not climbing anywhere close
to 3G!).
it looks like I was going through my peers and requesting the same set
of keys in an endless loop... We can all agree thats bad behavior,
however my poor peers should have never leaked memory like that as my
server was cycling the requests through all the peers with about 120
seconds between each request..
I agree with Ryan here that even if his host *had* been compromised and
doing what amounts to a gossip DoS attack, the other nodes in the
network should not have tried to gobble up all the RAM on their
respective systems. We should not need to fully trust our peers in the
SKS network to run a public keyserver.

Alas, i'm swamped for time right now (and also don't know OCaml), so
about the only thing i can offer for debugging is to set hard RAM limits
on the sks processes on zimmermann.mayfirst.org and report back on how
the tool deals with hard RAM exhaustion. If there's any particular data
i should gather in that case, please let me know and i'll try to gather
it and report back.

Regards,

--dkg
John Marshall
2009-02-16 00:50:28 UTC
Permalink
Post by Daniel Kahn Gillmor
I agree with Ryan here that even if his host *had* been compromised and
doing what amounts to a gossip DoS attack, the other nodes in the
network should not have tried to gobble up all the RAM on their
respective systems. We should not need to fully trust our peers in the
SKS network to run a public keyserver.
I observed this insatiable recon behaviour when I was familiarizing
myself with SKS. I built two servers; loaded one of them from a recent
(public) dump; peered them and - kaboom!

I asked on this list (ref. below) about any knobs to tune recon and/or
db, similar to the knobs available for build/fastbuild, but had no
response. If anybody knows or finds that information, it would be a
great help to us all to post it to the list.

http://lists.gnu.org/archive/html/sks-devel/2009-01/msg00012.html
--
John Marshall
Ryan Hunt
2009-02-16 17:48:11 UTC
Permalink
I don't know what is wrong with my system. I rebuilt the DB from my old
dump, brought it online and got updated from my peers... and then it got
stuck in a new loop trying to get new keys.

I have disabled recon at the moment to keep from hurting my peers.

Any ideas? I have verified all permissions, created a new database. It
got ~14k keys from my peers when I brought the new db online and all
those keys seem to be in the database.

Here are the last 10 minutes' worth of recon.log:

2009-02-16 10:28:35 Requesting 3 missing keys from <ADDR_INET
72.190.107.50:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:28:35 3 keys received
2009-02-16 10:29:31 <recon as client> error in callback.:
Sys_error("Connection reset by peer")
2009-02-16 10:30:35 Hashes recovered from <ADDR_INET 114.31.78.196:11371>
2009-02-16 10:30:35 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:30:35 38DFFFFA1F141DF06D9533038B5AEC83
2009-02-16 10:30:35 4F87D632F4FFC77E1A58593CF9D5C62D
2009-02-16 10:30:35 6B8DBCBF72AFBC8B72ABEB727E941036
2009-02-16 10:30:36 Requesting 4 missing keys from <ADDR_INET
114.31.78.196:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:30:37 4 keys received
2009-02-16 10:30:37 Added 2 hash-updates. Caught up to 1234805437.493819
2009-02-16 10:31:21 Hashes recovered from <ADDR_INET 131.215.176.75:11371>
2009-02-16 10:31:21 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:31:21 38DFFFFA1F141DF06D9533038B5AEC83
2009-02-16 10:31:21 4F87D632F4FFC77E1A58593CF9D5C62D
2009-02-16 10:31:31 Requesting 3 missing keys from <ADDR_INET
131.215.176.75:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:31:31 3 keys received
2009-02-16 10:32:20 Hashes recovered from <ADDR_INET 84.16.235.61:11371>
2009-02-16 10:32:20 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:32:20 2B3BC55DD351B9B2381355C16016BC81
2009-02-16 10:32:20 38DFFFFA1F141DF06D9533038B5AEC83
2009-02-16 10:32:20 4F87D632F4FFC77E1A58593CF9D5C62D
2009-02-16 10:32:20 6D12458590911F13DDC6DBAC60E37E1C
2009-02-16 10:32:20 AD6C2B2CE9D7AAC71F1EC5C671D260BC
2009-02-16 10:32:22 Requesting 6 missing keys from <ADDR_INET
84.16.235.61:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:32:23 6 keys received
2009-02-16 10:32:23 Added 3 hash-updates. Caught up to 1234805543.172267
2009-02-16 10:33:16 Hashes recovered from <ADDR_INET 195.111.98.30:11371>
2009-02-16 10:33:16 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:33:16 38DFFFFA1F141DF06D9533038B5AEC83
2009-02-16 10:33:16 4F87D632F4FFC77E1A58593CF9D5C62D
2009-02-16 10:33:26 Requesting 3 missing keys from <ADDR_INET
195.111.98.30:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:33:26 3 keys received
2009-02-16 10:34:15 Hashes recovered from <ADDR_INET 195.111.98.30:11371>
2009-02-16 10:34:15 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:34:15 38DFFFFA1F141DF06D9533038B5AEC83
2009-02-16 10:34:15 4F87D632F4FFC77E1A58593CF9D5C62D
2009-02-16 10:34:25 Requesting 3 missing keys from <ADDR_INET
195.111.98.30:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:34:26 3 keys received
2009-02-16 10:35:21 Hashes recovered from <ADDR_INET 72.190.107.50:11371>
2009-02-16 10:35:21 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:35:21 38DFFFFA1F141DF06D9533038B5AEC83
2009-02-16 10:35:21 4F87D632F4FFC77E1A58593CF9D5C62D
2009-02-16 10:35:21 Requesting 3 missing keys from <ADDR_INET
72.190.107.50:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:35:21 3 keys received
2009-02-16 10:36:20 Hashes recovered from <ADDR_INET 195.111.98.30:11371>
2009-02-16 10:36:20 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:36:20 38DFFFFA1F141DF06D9533038B5AEC83
2009-02-16 10:36:20 4F87D632F4FFC77E1A58593CF9D5C62D
2009-02-16 10:36:20 944A73D73698FFBFBA74B159838DC9B3
2009-02-16 10:36:21 Requesting 4 missing keys from <ADDR_INET
195.111.98.30:11371>, starting with 24930CB60AA1C6C4B1226B120EA8DE56
2009-02-16 10:36:22 4 keys received
2009-02-16 10:36:22 Added 1 hash-updates. Caught up to 1234805782.306689
2009-02-16 10:37:23 <recon as client> error in callback.:
Sys_error("Connection reset by peer")
2009-02-16 10:38:25 <recon as client> error in callback.: Unix error:
Connection refused - connect()
2009-02-16 10:39:23 <recon as client> error in callback.:
Sys_error("Connection reset by peer")
Phil Pennock
2009-02-17 00:01:59 UTC
Permalink
Post by Ryan Hunt
I don't know what is wrong with my system, I rebuilt the DB from my old
dump.. brought it online and got updated from my peers... and then get
stuck in a new loop trying to get new keys.
I have disabled rcon at the moment to keep from hurting my peers.
Any ideas? I have verified all permissions, created a new database. It
got ~14k keys from my peers when I brought the new db online and all
those keys seem to be in the database.
ktrace/strace it, see where it's failing.
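
Something along these lines is usually enough to see where it's stuck
(flags are just one reasonable invocation; adjust the pid lookup to
taste):

  strace -f -tt -o /tmp/sks-recon.trace -p $(pgrep -f 'sks recon')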

Separately, the current problems are more than just you.

I restarted recon to pick up a couple of admin changes (ulimits, reopen
fds 0,1,2 on /dev/null so they're not revoked in the process) and got to
see the memory and RSS usage shoot up. CPU was pegging out. When it
finished, the memory/RSS stayed high but the CPU dropped back down to 0.

This maxed out at 811MB memory, so is much lower than before.

While this was happening, I hit the recon process with lsof and saw that
it was connected to [217.66.26.140], which is bazon.ru. This is the
peer who only has a few tens of thousands of keys.

Azamat, did you ever start from a full keydump?

-Phil
Ryan Hunt
2009-02-17 00:21:51 UTC
Permalink
strace & corresponding recon log available @

http://nayr.net/sks/

Let me know if any more information is needed. Since I ran this as root,
my database permissions changed... is that normal?

Cheers,
-R
Phil Pennock
2009-02-17 01:51:24 UTC
Permalink
Post by Ryan Hunt
http://nayr.net/sks/
Let me know if anymore information is needed, I since I ran this as root
my database permissions changed.. is that normal?
Whatever last created a .db file owns the file. You can chown them
without problem, but you'll obviously need to do so before starting the
program which runs as non-root (and running tests as root means that any
permissions failures won't show up, so it's more prone to heisenbugs).

Eg, to create an svn repository using a bdb backend, I can use svnadmin
as root then chown -R to the runtime user behind svn. Similarly for
restoring an OpenLDAP setup using slapadd.
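
In sks terms that would be something along the lines of the following;
the user and paths are whatever your packaging uses, these are just
Debian-flavoured guesses:

  chown -R debian-sks:debian-sks /var/lib/sks /var/spool/sks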

I don't know sks well enough to be sure, but it looks as though your
recon process is successfully dumping the keys from the remote side to
diff files in /var/spool/sks/diff-$IP.txt but then timing out when
talking to the db server on /var/run/sks/db_com_sock; however, your
PTree lives under /var/lib/sks/PTree/.

Are /var/lib/sks and /var/spool/sks the same directory? I thought that
only basedir and cwd were used and the individual DBs couldn't be
relocated individually. Has the cwd of the processes when they start
changed?

-Phil
Ari Trachtenberg
2009-02-17 02:33:42 UTC
Permalink
I restarted recon to pick up a couple of admin changes (ulimits, reopen
fds 0,1,2 on /dev/null so they're not revoked in the process) and got to
see the memory and RSS usage shoot up. CPU was pegging out. When it
finished, the memory/RSS stayed high but the CPU dropped back down to 0.
This maxed out at 811MB memory, so is much lower than before.
While this was happening, I hit the recon process with lsof and saw that
it was connected to [217.66.26.140], which is bazon.ru. This is the
peer who only has a few tens of thousands of keys.
This could cause a problem. Remember that the memory scales with delta
... so if delta is high, memory usage will be as well. This is usually
not a problem for hard disk usage, but it could be a problem for RAM.

The simplest solution would be to switch modes when the number of
differences is astronomical and simply do a wholesale transfer of data.
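
In pseudo-OCaml, the shape of what I mean (the threshold and names are
made up for illustration, not SKS internals):

  (* Fall back to a bulk transfer when the estimated difference is huge. *)
  let max_recon_delta = 50_000

  let synchronize ~estimated_delta ~reconcile ~bulk_transfer =
    if estimated_delta > max_recon_delta
    then bulk_transfer ()   (* wholesale dump/load of the key set *)
    else reconcile ()       (* normal set-reconciliation path *)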

Best,
-Ari
Azamat S. Kalimoulline
2009-02-17 23:25:59 UTC
Permalink
Post by Phil Pennock
While this was happening, I hit the recon process with lsof and saw that
it was connected to [217.66.26.140], which is bazon.ru. This is the
peer who only has a few tens of thousands of keys.
Azamat, did you ever start from a full keydump?
No, I didn't. Is it mandatory?
I thought my reconciliation server wasn't working, so I didn't worry
about it. If it is mandatory, I'll load a full keydump tomorrow.
--
WBR Turtle//BAZON Group
Azamat S. Kalimoulline
2009-02-17 23:50:21 UTC
Permalink
Post by Phil Pennock
Azamat, did you ever start from a full keydump?
Hm... Where can I find a full keydump?
The http://nynex.net/keydump/ URL recommended in the Debian doc looks empty...
--
WBR Turtle//BAZON Group
Alex Roper
2009-02-17 23:53:38 UTC
Permalink
You can find a keydump here: http://www.ugcs.caltech.edu/~pgp/backups/20090215/

Alex
Post by Azamat S. Kalimoulline
Post by Phil Pennock
Azamat, did you ever start from a full keydump?
Hm... Where can I find full keydump?
Recommended http://nynex.net/keydump/ in Debian doc looks like empty...
Jonathan Oxer
2009-02-17 23:56:31 UTC
Permalink
Post by Azamat S. Kalimoulline
Hm... Where can I find full keydump?
Recommended http://nynex.net/keydump/ in Debian doc looks like empty...
There are some sources listed (and a procedure for loading them) at
www.keysigning.org/sks

Cheers :-)
--
Jonathan Oxer
Ph +61 4 3851 6600
Geek My Ride! <http://www.geekmyride.org/>
John Clizbe
2009-02-18 00:09:29 UTC
Permalink
Post by Azamat S. Kalimoulline
Post by Phil Pennock
Azamat, did you ever start from a full keydump?
Hm... Where can I find full keydump?
Recommended http://nynex.net/keydump/ in Debian doc looks like empty...
Try
ftp://ftp.pramberger.at/services/keyserver/keydump/
or
ftp://ftp.prato.linux.it/pub/keyring/

A good guide is at http://www.keysigning.org/sks

recon appeared not to be working because there were too many keys
different between your server and the rest of the network.


--
John P. Clizbe Inet:John (a) Mozilla-Enigmail.org
You can't spell fiasco without SCO. hkp://keyserver.gingerbear.net or
mailto:pgp-public-***@gingerbear.net?subject=HELP

Q:"Just how do the residents of Haiku, Hawai'i hold conversations?"
A:"An odd melody / island voices on the winds / surplus of vowels"
Azamat S. Kalimoulline
2009-02-26 04:30:05 UTC
Permalink
Post by Phil Pennock
Azamat, did you ever start from a full keydump?
Everything should be OK now.
--
WBR Turtle//BAZON Group
Yaron Minsky
2009-02-15 15:18:58 UTC
Permalink
Post by Phil Pennock
I begin to wonder if recon is sub-optimal with a large delta of keys to
send and also to wonder if I should bump "learn to read OCaml" up my
priority list -- I'm managing to navigate the sks source faster already,
but I'm still mostly in the dark.
For what it's worth, I would strongly encourage this. I have not had much
time of late to support sks, and having someone else learn the codebase
would be great. I'd be happy to help out if you wanted to learn more and
had questions about the internals...

y
Kim Minh Kaplan
2009-03-22 14:14:02 UTC
Permalink
I was seeing this in recon.log:

2009-03-22 08:34:10 Error getting missing keys: Out of memory

This is because the server makes use of the response even though the
request failed. It then proceeds to erroneously allocate a huge amount
of memory (which sometimes fails). On most systems this is not very
serious, as the memory is never used and so never really allocated.
This fixes it, i.e. it no longer tries to use the result of failed
requests.
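
To illustrate the shape of the change (a standalone sketch, not the
actual patch; the type and names here are invented):

  (* Only trust the length field in a reply if the request itself
     succeeded; otherwise a garbage length drives a huge allocation. *)
  type reply =
    | Reply of int * string        (* claimed length, raw payload *)
    | Request_failed of string

  let extract_payload = function
    | Request_failed msg -> Error ("request failed: " ^ msg)
    | Reply (len, payload) when len < 0 || len > String.length payload ->
        Error "implausible length field in reply"
    | Reply (len, payload) -> Ok (String.sub payload 0 len)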

Kim Minh.
