
not accommodate firewalls continued to shrink. We just had to find a way to make our systems work with firewalls and we knew that we had to find a way to do it without requiring special work by firewall administrators. If only we could find a way to make DESCHALL communications look like something that the firewalls already knew about and were configured to allow, we could get so many more clients running.


Thursday, April 3, 4:55 P.M.
Lawrence Livermore National Laboratory, California

As Karl Runge read his e-mail, he came across a message that Guy Albertelli from Ohio State had posted a few hours earlier. Albertelli had looked through the DESCHALL project statistics for April 2, found a single system on the report that had tested over 700 billion keys, and asked if anyone knew what kind of system could test so many keys.
Runge did some quick calculations in his head and thought that it
might be one of the latest Intel Pentium systems, with four processors
running at 200 MHz each. Even that did not seem quite right, and he
thought that it was more likely a system that was actually acting as a
proxy or go-between for a group of systems that could not talk directly
to the keyserver. Runge knew something about how individual project participants might introduce an architectural component like a proxy to overcome obstacles they faced in getting their systems to participate in the DESCHALL project.
Five days earlier, Runge had worked out a scheme to allow his computers at home to talk with the keyserver. His home local area network (LAN) had several systems on it, and the entire LAN was connected to the Internet over a modem. That usually served Runge’s needs very well, but sometimes UDP datagrams like the DESCHALL messages would not make it to their destination. UDP was at its most unreliable when trying to work over networks with severe bandwidth limitations, as could sometimes be the case when several computers were sharing the bandwidth of a single modem. Runge needed to find a reliable way to get DESCHALL messages from his computers running the clients on his home network to the network in his laboratory, where his modem connected.
Runge’s solution was a pair of programs written in the Perl programming language. The first program would accept messages from the DESCHALL clients on his network in their usual UDP format and convert them to TCP format. (This would be like taking a message written on a postcard and putting it into an envelope.) The program would then forward the TCP message over Runge’s modem link to another system that he had at work. That system was running another program, one that would take the message out of its TCP format, put it back into UDP, and forward it on to the real keyserver. Responses from the keyserver would go through the same process in reverse.
The system that actually handed the message to the keyserver would be the one that the keyserver thought was the client. Thus, both of Runge’s machines would look like a single client to the keyserver. Runge concluded that whoever had processed 700 billion keys the day before probably had a setup like his, rather than a single top-of-the-line system with multiple processors doing nothing but running a DESCHALL client.
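Although Runge’s Perl is not reproduced here, the idea is simple enough to sketch in a few lines of Python; the host names, port numbers, and two-byte length prefix below are invented for illustration, and the sketch handles one client exchange at a time rather than many at once.

    # A sketch of the home-network half of Runge's scheme (illustrative only,
    # in Python rather than Runge's Perl; names and ports are invented).
    import socket
    import struct

    LISTEN_PORT = 9001                            # where the DESCHALL clients send their UDP messages
    LAB_RELAY = ("relay.lab.example.gov", 9002)   # the matching program at the laboratory end

    def recv_exact(sock, n):
        # Read exactly n bytes from the TCP connection.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("relay link closed")
            buf += chunk
        return buf

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", LISTEN_PORT))
    tcp = socket.create_connection(LAB_RELAY)     # one reliable connection over the modem link

    while True:
        datagram, client = udp.recvfrom(2048)     # a client's message, on its "postcard"
        # Put it in an "envelope": a two-byte length prefix, then the message, over TCP.
        tcp.sendall(struct.pack("!H", len(datagram)) + datagram)
        # The reply relayed back from the keyserver arrives in the same envelope format.
        (length,) = struct.unpack("!H", recv_exact(tcp, 2))
        reply = recv_exact(tcp, length)
        udp.sendto(reply, client)                 # back onto the home LAN as ordinary UDP

The program at the laboratory end would be the mirror image: accept the TCP connection, unwrap each length-prefixed message, send it to the keyserver as a UDP datagram, and wrap the keyserver’s reply for the trip back over the modem.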
Unbeknownst to Runge, Justin Dolske and I had been privately talking about the same thing with Rocke Verser. Our system for working through firewalls was a straightforward one, almost identical to the one that Karl Runge created for his own system. Justin Dolske wrote a pair of programs: one was called U2T (UDP to TCP). Instead of converting a UDP datagram into a TCP packet, U2T formatted the data to look
like a Web request: a HyperText Transfer Protocol (HTTP) message carried inside of a TCP packet. The other was called T2U and converted the TCP-based HTTP message back into a UDP datagram to be forwarded to the keyserver.
The Web uses HTTP to communicate. HTTP is a higher-level protocol than TCP and UDP: it relies on the foundation provided by “lower-level” protocols. Protocols like TCP and UDP will get the data you need sent from one system to another in the appropriate chunks, actually carried across the Internet infrastructure inside of IP packets. HTTP defines the format of the message itself. It would be the equivalent of the sender and recipient deciding that when they are sending messages back and forth, they’re always going to have some lines at the top like To, From, and Date. They would also need to agree to write in the same language. This is the role of HTTP.
So in practice, what happens is that Web traffic is formatted in HTTP, carried in TCP packets, which are in turn carried in IP packets.
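To make the layering concrete, here is roughly what a simple Web exchange looks like on the wire. The example is simplified and hypothetical (real messages carry more headers), but the first few lines of each message play the same role as the To, From, and Date lines in the analogy above, with the content itself following the blank line.

    GET /index.html HTTP/1.0          (the browser's request)
    Host: www.example.com

    HTTP/1.0 200 OK                   (the server's response)
    Content-Type: text/html
    Content-Length: 1030

    <html> ...the page content... </html>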
Participants who wanted to run clients for DESCHALL behind their corporate firewalls would download U2T and run it on a machine on the same internal network as the clients, behind the firewall. The user would then tell the U2T server the address of the firewall system used to forward Web requests from internal systems to external (i.e., Internet) Web sites. Once the U2T system was configured and started, it would start listening for UDP datagrams that had DESCHALL client requests in them.
After the U2T system was running, the users would then start their DESCHALL clients, but instead of telling the clients to use the real DESCHALL keyserver (to which the firewall would block access), the user would tell the DESCHALL client that the U2T server was the keyserver. That would start the client, which would contact the U2T server and ask for keys to test.
The U2T server would receive the message in a UDP datagram from the client and put exactly the same message in the form of an HTTP message, which would get put into a TCP packet, which would get put into an IP packet, and then sent to the firewall with an ultimate destination of one of the three T2U servers that Justin Dolske and I ran at Ohio State and Megasoft, respectively. Each T2U server had the same functionality as the others; we just used three servers so we could spread the load among more systems.
The T2U server would listen for HTTP messages inside of TCP packets from a U2T server. Just as the participant’s U2T server took the client request out of the UDP datagram and put it into an HTTP message, the T2U server would take the client request out of the HTTP message, create a new UDP datagram, put the client request into that datagram, and then send that datagram to the keyserver.
Next, the keyserver would receive the client request in a UDP datagram that looked like any other client request. It would accept the request, and send the result back to the T2U server that sent the request, in the form of a usual DESCHALL message in a UDP datagram.
The T2U server would accept the response from the keyserver, pulling the message out of the UDP datagram, creating a new HTTP message, and putting the keyserver’s response into that HTTP message. That HTTP message would then be sent back to the U2T server, which would pull the response out of the HTTP message and put it back into a new UDP datagram to be sent to the original client.
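Stripped to its essentials, the whole arrangement can be sketched as a pair of small programs. The sketch below is in Python with hypothetical host names, ports, and URL path; it assumes the keyserver’s reply simply rides back in the HTTP response, and it leaves out the error handling and message formatting of Dolske’s real U2T and T2U.

    # u2t.py -- a sketch of the half that runs inside the firewall
    # (illustrative only; host names, ports, and URL path are invented).
    import socket
    import http.client

    PROXY = ("proxy.corp.example.com", 8080)           # the site's Web proxy on the firewall
    T2U_URL = "http://t2u.example.org:8000/deschall"   # one of the T2U servers outside

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", 9001))    # clients are pointed here instead of at the real keyserver

    while True:
        datagram, client = udp.recvfrom(2048)
        # Wrap the client's message in an HTTP POST and send it through the Web proxy.
        conn = http.client.HTTPConnection(*PROXY)
        conn.request("POST", T2U_URL, body=datagram,
                     headers={"Content-Type": "application/octet-stream"})
        reply = conn.getresponse().read()   # the keyserver's reply, riding back in the response
        conn.close()
        udp.sendto(reply, client)           # unwrapped and handed to the client as plain UDP

    # t2u.py -- a sketch of the half that runs on the open Internet
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    KEYSERVER = ("keyserver.example.org", 9003)        # stand-in for the real keyserver's address

    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers["Content-Length"])
            datagram = self.rfile.read(length)         # the client's original message
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.sendto(datagram, KEYSERVER)           # forward it to the keyserver as UDP
            reply, _ = sock.recvfrom(2048)             # wait for the keyserver's answer
            self.send_response(200)
            self.send_header("Content-Length", str(len(reply)))
            self.end_headers()
            self.wfile.write(reply)                    # carry the answer back in the HTTP response

    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()

In this sketch the U2T half sends each wrapped message through the site’s Web proxy, exactly the path the firewall already allows for ordinary Web browsing, which is why no special work by firewall administrators was needed.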
Adding the T2U servers to the architecture and distributing U2T software that participants would run behind their firewalls satisfied all of our requirements: DESCHALL could work safely through firewalls without requiring any changes to be made in the firewalls, the DESCHALL clients, or the keyserver.
Having addressed the issue of how to help clients behind firewalls to participate, we were ready to charge ahead. With quadrillions of keys left to test, we were going to need the help of people behind corporate firewalls.
15
Progress




Wednesday, April 9, 8:09 P.M.
Yale University, New Haven, Connecticut

Computer science student Jensen Harris had two machines with some extra processing power available. Like most student-owned machines, they weren’t spectacularly powerful. Both had Pentium processors, one 90 MHz and the other 150 MHz, but they spent most of their time sitting idle, so Harris thought it would be a good idea to contribute their idle cycles to the DESCHALL effort.
Computer microprocessors come in many varieties, such as Intel’s Pentium, the PowerPC from IBM and Motorola, and Sun Microsystems’ SPARC family of processors. While the details of what exactly happens inside vary dramatically from one family of processors to another, all processors essentially work the same way: an instruction is given and the processor responds by performing calculations or moving data from one place to another. Processors have a “cycle”: a tiny period of time during which an instruction can be executed.
“Hertz” is the metric measurement of frequency, named in honor of German physicist Heinrich Rudolf Hertz, who made several important contributions in the field of electromagnetism. One hertz (1 Hz) is one cycle (or event) per second. One kilohertz (1 kHz) is one thousand cycles per second. One megahertz (1 MHz) is one million cycles per second. One gigahertz (1 GHz) is one billion cycles per second.
Processor clock speed can be a useful measurement to compare processors of the same family to each other: a 150 MHz Pentium is about 67 percent faster than a 90 MHz Pentium. The problem with clock
speeds is that they are almost useless for comparing processors of different types to each other; there is no way to tell how a 100 MHz PowerPC processor compares to a 100 MHz Pentium, since what each processor will be able to do in a given cycle will vary dramatically. Some processors, like the PowerPC, will do a lot of work in a cycle, while others, like the Alpha, will do very little in a single cycle.
Watching the DESCHALL client run was a great way to get a sense of how many different things contribute to just how much work different computers can accomplish in a given period of time. Because the DESCHALL client would simply test one key right after another without waiting for input from the user or anything else, the client would just run and run as fast as the processor could support it. The processor’s clock speed will matter to overall system speed, but so will the amount of work that can be done in a given cycle. How much can be done in a cycle depends on things like how the processor is designed and just how well the software can take advantage of that design. Rocke Verser’s hand-optimized DES key testing software for the Pentium processor was so fast because he was able to get more work out of each processor cycle.
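To put rough numbers on that, take the benchmark figure quoted a little later in this chapter, a 90 MHz Pentium testing about 454,000 keys per second. A back-of-the-envelope division (an illustration only, not a published cycle count) shows what that optimization bought:

    clock_cycles_per_second = 90_000_000   # a 90 MHz Pentium
    keys_per_second = 454_000              # benchmark rate for the client on that machine
    print(clock_cycles_per_second / keys_per_second)   # roughly 198 cycles per key tested

In other words, the client was testing an entire DES key, sixteen rounds of permutations and table lookups, in something like a couple of hundred clock cycles.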
Looking over the project statistics for the past few days, reproduced in Table 4 (on page 89), Jensen Harris saw how powerful DESCHALL had become. At this point, we were testing 2 trillion keys per day, compared to the 496 billion keys per day we had been testing less than six weeks earlier. Harris wondered about the value of having his two mid-range desktop computers working on the project. It was clear that there were many other locations doing a lot more work. But at what point, Harris asked in a message written to the DESCHALL mailing list, does a contribution become too small to be worth the effort?
DESCHALL participants answered Harris’ question resoundingly: every little bit helped. Unless the keyserver simply could not keep up with demand for instructions from key-testing clients, even the slowest of machines was valuable.
With Rocke Verser’s fast Pentium software, however, the danger of any Pentium machine ever becoming a burden was nonexistent; the DESCHALL key testing software ran much more slowly on many other systems that used other processors. Those slower clients would become a burden on the keyserver long before even the slowest of Pentium processors. With the lightweight UDP-based protocol for communication between clients and servers, the likelihood that the keyserver wouldn’t
be able to support the load put on it by the number of clients we had
was also pretty low.
Lee Sonko from Bowne Global Solutions was in agreement with the
rest of the mailing list participants, but he wanted to see just how much
the small contributors mattered to the project.
Using the statistics from April 8, Sonko was able to show how much of the day’s work was performed by, as he called them, “big boys,” “medium-sized domains,” and “small domains.” The “big boys” were the five domains testing a trillion or more keys per day. The “medium-sized domains” were the 55 domains testing between 69.8 billion and 1 trillion keys per day. The “small domains” were the rest: 108 domains testing between 16 million and 66.8 billion keys for the day. After breaking them up into those three groups, he totaled the number of keys tested, not per domain, but per group. (Table 5 summarizes his findings.)

Group      Keys Processed
Big Boys   11.16 Trillion
Medium     10.95 Trillion
Small       1.83 Trillion
Total      23.95 Trillion
Table 5. Work Performed by Size




Although the small domains were testing only one tenth of the number the large domains were testing, it was clear that their contributions mattered. None of the DESCHALL participants wanted to lose any processing power at all.
A total of almost 24 trillion keys were processed on April 8 by all DESCHALL participants together. On that same day, roughly 1326 machines (as determined by unique IP addresses, a reasonable but inexact approximation) worked on the project. That would mean that on average, each machine processed roughly 18 billion keys that day.
We used a relatively modest system as our “benchmark” to measure
how fast a “typical” machine would work its way through the DES
keyspace. That benchmark system was a 90 MHz Pentium running
FreeBSD, which ran at a rate of 454,000 keys per second. In a 24-hour
day, that machine would process some 39 billion keys. Thus, a relatively
modest 90 MHz Pentium computer, working all day, would be faster
than average, meaning that it would increase the average speed per
host.
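The arithmetic behind those figures is simple enough to check; a couple of lines of Python using the approximate totals given above:

    keys_tested = 23.95e12             # total keys tested on April 8 (the Table 5 total)
    machines = 1326                    # unique IP addresses seen that day
    print(keys_tested / machines)      # about 1.8e10: roughly 18 billion keys per machine

    benchmark_keys_per_second = 454_000         # the 90 MHz Pentium benchmark
    print(benchmark_keys_per_second * 86_400)   # about 3.9e10: some 39 billion keys in a day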
We were nowhere near the point at which any client would prevent any other from getting work done. Not only were we not ready to ask people with the slower systems to stop participating, we wanted more clients, and we needed as many as we could get. Testing 72 quadrillion keys was a job for a lot of processors: the more, the better. As Justin Dolske observed, “Quantity has a quality all its own.”




While we were also working to support participation through firewalls, other projects were still searching for the key. The European SolNET effort was still going strong. The DES Violation Group was also still running, though by this point they were falling further behind. Each of these efforts was, in one sense, competing with our effort, since they ran their own keyservers and their client software was different from ours.
