key retrieval is just barely faster with judy, although if you need the keys sorted judy is a clear win (since it stores them internally in sorted form already)
[0.085982] JUDY: read 980 sorted keys from mda to mdc
[1.327951] HASH: read 981 unsorted keys from mda to mdc
[0.00025] JUDY: read 100 sorted keys from mda to mdc limit 100
[0.907928] HASH: read 100 unsorted keys from mda to mdc limit 100
[0.085711] JUDY: read 194 sorted keys from mda to mdc match /4/
[1.359322] HASH: read 194 unsorted keys from mda to mdc match /4/
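these benchmarks look like they are exercising the Tie::Judy module through its tie interface; assuming that, here is a minimal sketch of what "read sorted keys" means in practice (the key names are made up for illustration):

```perl
use strict;
use warnings;
use Tie::Judy;   # requires the Judy C library to be installed

# tie a hash to a JudySL array; keys are kept internally in sorted order
tie my %judy, 'Tie::Judy';

# illustrative keys in the same spirit as the benchmark's mda..mdc range
$judy{ sprintf 'md%s%03d', $_ % 2 ? 'a' : 'c', $_ } = $_ for 1 .. 1000;

# unlike a native hash, keys() comes back already sorted, so there is
# no separate scan-then-sort pass before you can use the results
my @sorted = keys %judy;
```

with a native hash you would have to `sort keys %hash` yourself on every read, which is where the timing gap above comes from.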
the new search method in 0.05 really lets judy arrays shine where sorted results are required. if the sort order of your data matters, you can retrieve subsets of the keys/values in a fraction of the time it would take with a hash, where you'd have to scan every key and then sort the results after collecting them.
[1.161717] JUDY: read 191777 sorted keys where value matches /4/
[1.691851] HASH: read 191777 unsorted keys where value matches /4/
even the value_re mechanism inside the search method is useful: it takes less time to return only the matching keys than it does to return all the keys and then do the matching yourself.
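a sketch of what a value_re search might look like. note: the search method is reached through the tied object, and the argument names below (value_re, limit) are guesses taken from the text and the benchmark labels above, not confirmed against the module's docs:

```perl
use strict;
use warnings;
use Tie::Judy;   # requires the Judy C library to be installed

tie my %judy, 'Tie::Judy';
$judy{ sprintf 'key%05d', $_ } = "value$_" for 1 .. 1000;

# hypothetical argument names -- check `perldoc Tie::Judy` for the
# real search() interface in your installed version (0.05 or later)
my @hits = (tied %judy)->search(
    value_re => qr/4/,   # only return entries whose value matches
    limit    => 100,     # stop after the first 100 matches
);
```

the point is that the match happens inside the search, so non-matching entries are never copied out to perl at all.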
synopsis: excellent in applications with large datasets where the data will persist for a while and must be kept/returned in alphanumerically sorted order.
spitting out pagination data would seem a particularly well-suited task: you could return 100 records, then quickly return the next 100 on the following request.
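a hypothetical pagination helper along those lines. the from_key/limit argument names here mirror the "from mda ... limit 100" benchmark labels above and are assumptions, not the module's documented interface:

```perl
use strict;
use warnings;
use Tie::Judy;   # requires the Judy C library to be installed

# fetch one sorted "page" of keys, resuming after the last key the
# client saw; argument names are illustrative guesses (see lead-in)
sub page {
    my ($judy_ref, $after_key, $page_size) = @_;
    return (tied %$judy_ref)->search(
        from_key => $after_key,   # start of the sorted range
        limit    => $page_size,   # page size, e.g. 100
    );
}

# first request:  my @p1 = page(\%judy, 'mda', 100);
# next request resumes from the last key of @p1 -- no re-scan, no re-sort
```

with a native hash, every page request would mean scanning and sorting the whole key set again before slicing out the page.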
Performance was about 10x slower than native Perl hashes for reading; however, writing to the Judy hash was blazingly fast, and memory consumption was extraordinarily low for large hashes. The Makefile.PL could use a warning and a link if you do not already have the Judy library installed.