Rough Transcript of W3C Security Group Meeting 22 Feb 95

These are rough notes taken by Tony Eng, Karen MacArthur and Tal Rabin.

Welcome from Al Vezza


ISOC Overview

Ron Rivest(MIT) described several papers which addressed a variety of topics.

There were also two panel discussions: on Internet Payment Systems, and Mosaic and WWW.

In the future, a URL pointer to an online copy of the proceedings will be made available.


IBM's "iKP"

IBM presented two proposals for W3C consideration: "iKP: A Family of Payment Protocols" and a proposal for a "Snap-In" security architecture.

Juan Garay presented iKP (a paper is available but still under development), which is a scheme that uses public key technology to enact electronic payment. Certain requirements are addressed, including the privacy of orders (e.g. date, quantity, etc.) and anonymity (e.g. who made the order). It is an interactive protocol in which the customer communicates with the merchant, who in turn communicates with the Acquirer (e.g. a bank or credit card company). The i in "iKP" refers to the number of public key pairs involved. The value i=1 represents the protocol scenario in which only the Acquirer's keys are used; similarly, i=2 and i=3 reflect the added use of the Merchant's and Customer's keys respectively. As the value of i increases, more of the protocol requirements are fulfilled and a higher degree of security is achieved, but at the cost of higher computational overhead.
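The relationship between i and the key pairs involved can be sketched as a toy illustration (this is not the iKP specification itself; the ordering of parties is taken from the description above):

```python
# Illustrative sketch only, not the actual iKP protocol: i counts how many
# parties hold public key pairs, added in the order Acquirer, then
# Merchant, then Customer as i goes from 1 to 3.
PARTIES = ["Acquirer", "Merchant", "Customer"]

def keyed_parties(i: int) -> list:
    """Return which parties use public key pairs in the iKP variant."""
    if not 1 <= i <= 3:
        raise ValueError("iKP is described for i in {1, 2, 3}")
    return PARTIES[:i]

for i in (1, 2, 3):
    # More keyed parties means more requirements met, at higher cost.
    print(f"{i}KP uses key pairs of: {keyed_parties(i)}")
```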

IBM has said that it is willing to discuss and listen to suggestions and comments on iKP. IBM was concerned that these should be open payment protocols, and was therefore prepared to put its work in that area on the table as a basis for the generation of a standard within W3C.

"Snap-In" Security

Marc Kaplan(IBM) presented "Snap-In" security and outlined its goals.

The design is pluggable and modular, necessitating the addition of a snap-client and crypto & key cache on the client end, and a snap-post, crypto component and snap-it on the server side.

The idea is that the added modules on the client end are proxy-like and act as an intermediary; messages from Mosaic are rerouted through the snap-in modules so that sensitive information can be encrypted, etc.

But the snap-client needs information like the TCP port number, so additional flows are necessary in which Mosaic performs some preliminary communication with the httpd server.
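The proxy-like rerouting described above can be sketched roughly as follows. All names, the message shape, and the placeholder XOR "cipher" are illustrative assumptions, not IBM's actual design:

```python
# Toy sketch of the snap-in idea: the browser hands its message to a local
# proxy-like module, which protects sensitive fields before anything
# reaches the network; a matching server-side module undoes the protection.

def toy_encrypt(data: bytes, key: int) -> bytes:
    """Placeholder cipher (XOR); a real snap-client would use real crypto."""
    return bytes(b ^ key for b in data)

def snap_client(message: dict, key: int) -> dict:
    """Intercept the browser's message and encrypt the sensitive payload."""
    protected = dict(message)
    protected["body"] = toy_encrypt(message["body"], key)
    return protected

def snap_server(message: dict, key: int) -> dict:
    """Server-side module: recover the payload before handing it to httpd."""
    clear = dict(message)
    clear["body"] = toy_encrypt(message["body"], key)  # XOR is its own inverse
    return clear

request = {"url": "http://example.org/order", "body": b"card=1234"}
wire = snap_client(request, key=0x5A)   # what actually crosses the network
assert wire["body"] != request["body"]
assert snap_server(wire, key=0x5A) == request
```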

Discussion

Allan Schiffman (EIT) mentioned that he had considered this idea before and asked about using actual proxies instead of using a co-hosted server and redirection. The reason for using the latter is so that the browser doesn't have to understand (how to deal with, etc) actual proxies.

Mary-Ellen Zurko(OSF) raised a concern about pass phrases and security, and whether the snap-client had to be protected from other users in the case of multi-user hosts. Marc Kaplan replied that at this point, they were assuming that there is only a single user on a host and that the local port can't be spoofed.

Tim Berners-Lee(MIT) pointed out that in certain cases a user should be made aware of the level of security of information that might be sensitive. Also, the passphrase is important against spoofing, because when dealing with proxies, everything is spoofable.

Ron Rivest(MIT) asked how passphrases are installed. The basic idea is that the snap-client authenticates itself to you by displaying your passphrase, something that is not usually done.

The presence of this intermediary could also be used for other things like filtering, monitoring, etc.


Offer from Steve Jobs/Richard Crandall (NeXT)

NeXT has a fast elliptic encoding technology which has never been used. It is an alternative to RSA and is public key. It is also "less likely to be crackable than RSA" (although it has not been around as long as RSA has). NeXT is prepared to donate it to the W3C if there is enough interest.

Comments

Charlie Kaufman(IRIS) was concerned about exportability issues.

Ron Rivest believes that no product currently has this, so perhaps there is no export license. He also mentioned at some point that he thinks we should design algorithm-independent protocols.

Allan Schiffman wondered if this handles authentication and encryption.

Tim Berners-Lee mentioned that the possibility of using a different scheme is good because the web protocols should be flexible enough to handle different formats, etc. He also mentioned that the elliptic encoding would still be covered by the Public Key Patent.

A discussion concerning patent and licensing issues ensued, and there was curiosity as to what RSA's reaction to NeXT's public key system would be.

Tim Berners-Lee asked if we should survey the different PK schemes, to examine patent and other potential issues/problems.

Steve Kent(BBN) pointed out that even if we have another algorithm that has the exact same external properties as RSA, and maybe even additional ones, we still may not have achieved algorithmic independence. We need to separate the functions, tackling the key exchange and digital signature functionalities separately. Merely having a placeholder for algorithm identifiers doesn't cut it. One possibility is to pick protocols/schemes that are qualitatively really different.
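Kent's separation of functions can be illustrated with a sketch (the class names and toy algorithms are our assumptions for illustration only): key exchange and digital signature are distinct roles that should be pluggable independently, possibly by qualitatively different schemes:

```python
# Illustrative sketch: key exchange and signing are separate functions,
# so an algorithm-independent design fills each role independently rather
# than reserving a single slot for "the algorithm".
import hashlib

class ToyKeyExchange:
    """Stands in for, e.g., a Diffie-Hellman style key exchange."""
    def derive_shared_key(self, secret: int, peer_public: int) -> bytes:
        return hashlib.sha256(str(secret * peer_public).encode()).digest()

class ToySigner:
    """Stands in for, e.g., an RSA or DSA signature scheme."""
    def sign(self, key: bytes, message: bytes) -> bytes:
        return hashlib.sha256(key + message).digest()

# The two roles can be filled by qualitatively different schemes:
session = ToyKeyExchange().derive_shared_key(secret=7, peer_public=11)
signature = ToySigner().sign(b"signing-key", b"order #42")
assert len(session) == 32 and len(signature) == 32
```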

Bede McCall(MITRE) mentioned Tessera and Capstone. Consequently, the following were listed on the whiteboard: RSA, FEE, Capstone, Kerberos and DES. It was noted that Kerberos is not an algorithm in the same sense that RSA is, and that from this list, one can create a table describing the different abilities of each algorithm.

Chuck McManis(SUN) mentioned that if there is support for many algorithmic systems, then there is a potential for a large key management problem since keys would have to be maintained for various servers, users, and now for the different systems.

But Allan Schiffman pointed out that because of US export laws, which are oblivious to the strength of authentication but not key exchange, we already have different keys lying around for different purposes.

So, a problem is that if we generalize too much, the result becomes hard to manage; however, if we don't generalize enough, it may be difficult to accommodate new protocols/standards that may come along. Flexibility and extensibility are an issue.

Certificate Formats

Allan Schiffman mentioned that SHTTP supported PKCS-6 and not X509-88 because extended attributes could be specified for the former.

Bede McCall thought that NSA would be unhappy if X509 Version 3 were used instead.

It was mentioned that PKCS-6 seemed to be a subset of X509 Version 3; there was then some discussion about whether or not one could convert from one format to the other. Allan Schiffman answered this in the negative, because doing so would destroy any cryptographically enhanced part of the message. However, he noted that one could treat PKCS-6 extended attributes as optional things in the X509 V3 specification, so that would be okay. It is also possible to convert certificates from PKCS-6 to X509-88 without cryptographic transformation.

Darren New(FirstVirtual) mentioned that HTTP must be compatible with MIME and other transmission formats, and a discussion about MIME and HTTP followed.


break

Export Restriction Politics..

During the break, a point was raised about politics and the export restriction problem. Effort will be required to convince Congress that things have to change. Would W3C be interested in making a joint statement to the American Government if one were drafted to take a stand? Is this an appropriate action? Would it be effective?

Allan Schiffman mentioned that American software companies may provide opposition because they will be at a disadvantage.

Jerry Waldbaum(IBM) said that "Sun and IBM would probably be ready to take this forward on an individual and joint basis" for three reasons:

  1. American industry would otherwise be hurt,
  2. The unexportable technology is already available outside the U.S., and
  3. It will force electronic commerce systems to the lowest common protocol, and that will jeopardize a lot of financial transactions.

A directory will be created to house a set of well reasoned individual standpoints.

It seems that the government wants to know how much the American economy could lose because of export restrictions.

In any event, there seemed to be a lot of enthusiasm, and everyone was invited to find and make known pointers to relevant information. In particular, Allan Schiffman will supply a URL to the National Research Council, which might have relevant information on their crypto policy study.

Discussion of Requirements and Architecture

What is the architectural difference between negotiation and security... should we separate them? It was mentioned that our focus on http security is central but not sufficient, and that other protocols should also fit the architecture.

Steve Kent mentioned the multicast aspect of email, and the issue of using email security designs here. The efficiency concern of email may apply here, but we must be careful when borrowing from the secure email environment. For example, sequencing in the email environment isn't a great concern, but in the transaction community, it is. So, we should consider the transaction nature, etc, and not put undue emphasis on capitalizing on instances of things done in other contexts.

Tim Berners-Lee agreed, adding that due to the possibility of proxies, the protocol would need to do all kinds of things, and Allan Schiffman voiced a concern for "preenhancement" support.

Darren New expressed concern that http is based on an obsolete version of MIME security.

Allan Schiffman pointed out that one fundamental difference between email and http is the notion of latency. Steve Kent added that the security protocol is designed for realtime point-to-point usage, while email is not realtime. Also, option negotiation was not considered during secure email design. So, basically, http should be a superset of what you can do in email.

Marc Kaplan issued a warning that the number of flows should be cut to a minimum when trying to achieve the required function.

Juan Garay mentioned the IPSec discussion of interactive and noninteractive key management protocols. Steve Kent then expressed concern about effort duplication for certain issues already being considered by other groups. Allan Schiffman pointed out that there might not be all that much overlap between IPSec and W3C. Steve Kent added that IPSec is attacking the problem at the secure conduit level.

Tim Berners-Lee suggested another angle on MIME -- with MIME, we pick up strange syntax; currently HTTP has SGML and MIME, and neither has a BNF description. The nice thing is that we can read it and dump it; it's easier to debug and understand. He then asked if there were any feelings on whether or not the protocol should stick with rfc822 formats.

Steve Kent has a perception that there is resistance to ASN.1 encodings because of the overhead involved. However, BBN just released a C++/ASN.1 compiler. He said not to be prejudiced by previous bad experiences with earlier tools, which might have been faulty.

NOTE: a "distinguished encoding" resolves ambiguities in the basic encoding, so that the same object encoded twice using a distinguished encoding will have the same signature.
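A toy illustration of the note above, using JSON with sorted keys as a stand-in for a distinguished encoding such as ASN.1 DER (the helper name is ours, not from the meeting):

```python
# Why a distinguished (canonical) encoding matters for signatures: the same
# logical object must serialize to the same bytes every time, or its hash
# (and hence its signature) would differ between encoders.
import hashlib
import json

def canonical_digest(obj) -> str:
    """Hash a canonical serialization: sorted keys, fixed separators."""
    encoded = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(encoded.encode()).hexdigest()

a = {"amount": 10, "currency": "USD"}
b = {"currency": "USD", "amount": 10}   # same object, different key order
assert canonical_digest(a) == canonical_digest(b)
```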

Allan Schiffman mentioned that the accommodation of MIME by existing web clients is pretty haphazard; these clients basically use only the bare minimum, like the content-type and content-length fields, and a lot of MIME things aren't supported. Darren New commented that "that will rapidly change". There are already problems with mailtool links and with the conversion of things (e.g. encrypted things) to MIME formats -- we need compatibility; we don't want to have one key for signing mailtool links and another key to sign other things.

John Klensin(MCI) stressed that MIME shouldn't be used for inner parts of nested structures. And Tim Berners-Lee asked how far down one pushes MIME when, say, a request is sent and the server sends a complex reply including sgml, dtd and many figures...

Darren New was only suggesting that at least the top level be compatible with MIME (and not that we use MIME all over). Perhaps we should use MIME all the way down to where you can't unwrap it anymore -- i.e. no one can unwrap it beyond a certain point.

Allan Schiffman remarked that we could just have recursive structures (signed messages whose components are themselves signed inner objects).

Steve Kent said that if there is the opportunity to negotiate what you can do; then you don't have to, for example, send a whole bunch of possible things down to the other end and have the receiver then select what it wants to deal with.

John Klensin suggested watching discussions of embedding SGML into MIME by the mimesgml IETF Working Group for example.

Tim Berners-Lee added that MIME has a dictionary space that ASN.1 doesn't have. So should there be negotiation in general (e.g. to negotiate your interfaces)?

John Klensin commented that we need to generate something that is ODI-like (ISO's Open Document Interchange architecture) and not SGML-like. ODI was developed in parallel with SGML and is ASN.1 based, but it died a quiet death... the problem was that it was too closely tied and too difficult to extend without getting involved with registration authorities.

Allan Schiffman reiterated the fact that when we talk about security, we must have a model in mind.

Chuck McManis(SUN) said that if security options are negotiated, there will be some implementations that negotiate the weakest-possible security options. He also wondered if access to the same plaintext encoded under several different cryptographic algorithmic options would be vulnerable to, say, a differential analysis attack (e.g. if you have the same plaintext encoded under DES and under RSA as well). Allan Schiffman responded that this is probably not possible, mentioning the possibility of making all plaintext different.

Martin Abadi asked what the outcome of the negotiation stage was. Tim Berners-Lee explained that you would negotiate algorithmic information (e.g. a choice of RSA/FEE/Capstone/Kerberos/DES, etc.) just as possible representation types (content-type), natural language, and compression means are negotiated with SHTTP now.
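The negotiation Tim Berners-Lee described can be sketched as follows; the function shape is an illustrative assumption, and the server's preference-ordered list is just the whiteboard list from earlier. Note that a server picking from the intersection by its own preference order need not end up at the weakest option:

```python
# Hedged sketch of option negotiation: the client advertises what it
# supports, and the server picks the most-preferred mutually supported
# algorithm according to its own ordering.
SERVER_PREFERENCE = ["RSA", "FEE", "Capstone", "Kerberos", "DES"]

def negotiate(client_supports, server_preference=SERVER_PREFERENCE):
    """Return the server's most-preferred mutually supported algorithm,
    or None if there is no overlap."""
    offered = set(client_supports)
    for alg in server_preference:
        if alg in offered:
            return alg
    return None

assert negotiate(["DES", "FEE"]) == "FEE"   # server prefers FEE over DES
assert negotiate(["IDEA"]) is None          # no common option
```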

Mary Ellen Zurko asked if this is orthogonal to whether or not there is security at the transport level. Steve Kent commented that there may also be orthogonality between security parameters negotiated at a lower layer and those at a higher layer (e.g. in cases when the browser has to tell you that some things are for your eyes only). So, there are times when the application may be ignorant of the security being provided, but the user might not be.


lunch

Tim Berners-Lee briefly went over the significance of some of the conclusions from November's meeting. Basically, everyone recognized the functionality that SHTTP has, and everyone also recognized the need satisfied by SSL. Once a secure channel is set, many nice things can be done. Now, when dealing with proxies, there may be pre-signed things sitting on disk, so we need something that is flexible and can provide a secure channel as a subset of its functionality.

The desire is to merge the API of SSL with the SHTTP mechanisms. And a target for this afternoon is to set a timescale for the release of the W3C library.

Rick Schell(Netscape) announced that Netscape would make the reference implementation of SSL free for noncommercial use under some licensing terms, and will also license it for commercial use.

Implementations of SSL that do not belong to Netscape are royalty free. A reference client and reference server will be made available.

It was proposed that we get something running as fast as we can. Ron Rivest asked why it is "as fast as we can" and not "as right as we can". Tim Berners-Lee replied that we want to do both, but fast to avoid an ODI repeat. Chuck Flink(ATT) said "we've got to do it right fast.."

Incidentally, the toolkit will be in the form of a library and this same library is used for both client and server.

Bede McCall asked about the plans/status with respect to the IETF. SSL specifications were presented a few months ago, and according to Jeff Treuhaft(Netscape), an internet draft was also submitted. A discussion also ensued about RFCs and Internet Drafts.

Marc Kaplan mentioned that SSL is like a secure channel, but what about the other requirements? Tim Berners-Lee replied that SSL functionality is currently used by some applications, but the November discussion had made it clear that more functionality was required of the future W3C protocol.

Marc Kaplan asked what the status of SSL would be. Tim Berners-Lee replied that the SSL protocol has not been endorsed by the W3C, and pointed out that Netscape has committed to move to the W3C standard protocol. He proposed that the W3C protocol, even if derived from SHTTP, should be provided in a form which allows the secure channel functionality of SSL to be available at the API.

A question was raised concerning whether SHTTP could provide SSL functionality. Allan Schiffman remarked that SHTTP doesn't look like a secure channel protocol, but if you take an SSL communication with its complete transaction data, and compare it with an SHTTP transaction restricted to, say, encryption of the message and mutual authentication, then you in fact get something that is hard to distinguish in terms of security properties; i.e. it just looks like a matter of encoding differences. Given that this is the case, you could imagine a different API that didn't give you access to normal application concerns like where messages begin and end, etc.

Chuck Flink commented that we need a secure negotiation protocol, but not necessarily one so closely tied to SHTTP (in terms of how it's presented and thought about). We may risk colliding with other groups that are making proposals on GSS, key management, etc. Tim Berners-Lee replied that we don't want to go out of our way to conflict or reinvent, but also don't want to make it so restricted that it's unable to handle, say, callbacks because it was designed for only a simple request/response transaction.

Steve Kent pointed out a difference between what was mentioned up at the whiteboard and what was written in the requirements document. This has to do with the payload carried by TCP, which in the paper seems to go down to lower layers. If indeed it does go down to the IP layer (crossing that magic boundary), then this probably will not work at all. We need to nail down that boundary.

Allan Schiffman commented that at this level of discussion, the "raw/nested data" was meant to include complex structures, and that at this point and at this layer of the protocol, we're not making any assumptions of the data at all except maybe reliable delivery.

Steve Kent reminded us of the danger of replicating the functionality of something we already have (at a lower level for example). So, it might be wise to take out "IP packets" from the diagram.

Martin Abadi wants more specifics on what kind of functionality we'd want beyond a secure channel and if it is possible to extend SSL. Allan Schiffman replied that this has to do with "reference integrity"; that it's not raw data that is being passed, but rather something that is actually out of band data; and to accommodate that specifically takes some sort of extension (other extensions would be needed to meet other requirements). So there are two possibilities: 1) the protocol and application program take no stance on the meaning of those bytes or 2) the different types of payload have different meanings.

Mary Ellen Zurko said, "Given this dichotomy, which goes with this picture?" [referring to the diagram in the Requirements Draft].

Several criticisms were made of the requirements document; several people commented that it was incomplete, and that certain things were implicit and should be made explicit. Also, that the "raw/nested data" are really two separate things. Allan Schiffman remarked that one of the hardest things to do is to put together a cogent security requirements list.

Ron Rivest suggested we make a list of a few things we want to make sure we do. He also commented that this sounds a lot like a layered protocol, and maybe that's not what we want; maybe just a request and reply model is enough; or maybe we need to think of it in an interactive sense, so that each thing you want to do has its own negotiation and payment stage. So the question is: do we negotiate per session, or just once and run everything over that?

Allan Schiffman made a very loose analogy to PEM -- all the stuff in the front of PEM, and then all the data that the user cares about comes later (this analogy isn't perfect because in our protocols the negotiation is interactive, and again, the issue here is latency).

Ron Rivest pressed for a notion of session and state.. What are the ground rules? What is the state information? Does this support all possible secure protocols?

Several points were clarified including the fact that payload can be interactive and bidirectional. Furthermore, that we should separate security processing from later data passing. Also, negotiation is not necessarily serialized with payload.

Steve Kent commented that if you establish negotiation first, it simplifies what you do afterwards. There is less chance of "unpleasant" surprises, and less chance of a mismatch of understanding between the two parties. So first create a serialized protocol, and once that is understood, maybe take it from there and piggyback for optimization, etc. We need to have clear latency and bandwidth requirements, and to be explicit about where the tradeoffs are being made.

Darren New observed that email is the degenerate case of our protocol, where there is no roundtrip delay - but when we introduce different algorithms, this gets more complicated.

Jerry Waldbaum(IBM) asked if the child of this union would be able to speak to either of its parents? If not, then he would have listed this as a requirement. Tim Berners-Lee clarified that more than one language will have to be spoken for a while, just as we already have ftp and gopher.

Tim Krauskopf(Spyglass) mentioned HTTP "containers" and asked to what extent the two approaches share code, etc. And how do these containers (i.e. RFC822 headers) affect the secure channel? What syntax and what negotiation would be used for this new protocol? It was mentioned that SSL has a binary syntax and that HTTP has different syntax, and that the new protocol would probably end up with some binary representation.

Scenarios Group

Harald Skardal asked what the top five candidate applications we'd want to use this for were. He suggested making a list of specific problems and candidate solutions; perhaps from these, we can find things that we could combine. Steve Kent added that we should pick ones that are as different as possible, understand their security requirements, and extract from them things that seem important.

Jeff Treuhaft(Netscape), Don Young(DEC), Harald Skardal(FTP), Jeff Hostetler (Spyglass) and Dale Dougherty(ORA) volunteered to come up with different scenarios for which security on the web would be required. These should be completed within FOUR weeks; each member of this group should submit a list of proposed scenarios via email to w3c-scenarios@w3.org. Results will be collated and posted. In addition, Allan Schiffman mentioned a relevant URL which lists some example CommerceNet service scenarios.

Continuation of Discussion of SSL and SHTTP

Mike Dolan pointed out that two fairly different implementations exist; so, in the interim, will both be endorsed? Tim Berners-Lee proposed that the final result be developed from SHTTP, but that neither SHTTP nor SSL will be endorsed as is. Mike Dolan then commented that since we are using a top down approach, we shouldn't immediately exclude SSL until we finish looking at the requirements, because we might later decide that SSL is what we want.

Rick Schell said that we have this document that is a greatest common divisor between the two protocols, but the rationale behind SHTTP and SSL has not been documented.

Thomas Reardon(Microsoft) said it might be useful if the interfaces provided by W3C were SSL-looking (to make it easy for those using SSL to transition).

Rick Schell mentioned that so far, we've gotten a "solution" without defining or agreeing on the requirements.

Steve Kent mentioned that neither of the two we have so far is the solution. Create an API that is the intersection of the two, so that I don't have to know which one I'm using (an API as an abstraction -- you can change things beneath it without anyone knowing). EIT is not sure this model makes anyone's job easier, but it is, at the very least, a political accommodation.

Jeff Treuhaft also mentioned the migration strategy problem.

Following this was a discussion on APIs in which gateways and security were mentioned, as were OSF DCE clients, etc.

Donald Young asked if it might be necessary to form a working group to define an API. There was a suggestion that an API for the intersection of the two sets might be useful.

Jerry Waldbaum asked if any of the 5 problems can be solved by this union, would we be in any better shape? Allan Schiffman pointed out that SSL speaks to API independence and simplicity, yet SHTTP provides a broader set of security mechanisms; no one claims that the union will offer a larger set. Jerry Waldbaum proposed a show of hands to see if it would be appropriate to start with SHTTP as the initial design.

Others proposed that secure channel functionality should be solved by another group; perhaps the Secure IP group.

Darren New mentioned that a standard for getting certificate authorities to use some HTTP mechanism is vital. We also need some way to store keys and session information.

Chuck McManis suggested that the focus is on making the tools we use secure; the tools are primarily http servers and browsers. This is one problem, separate from that of making the net secure. He wanted the http work to be pursued here and network security pursued outside this group. Jerry Waldbaum asked if we believed another group should address the issues of security other than http.

Steve Kent concurred and said that this group ought to focus on security in the web context, not in a general IP context (because there are other working groups for that). And with regard to the "User Awareness" criteria of the requirements, that is a GUI issue and something that should be decided locally. Avoid "one size fits all" solutions.

Harald Skardal reiterated that drop-in modules will be very important.

Mike Dolan still has a problem with what to do today... (even though Allan and Rick are willing to join forces to create something). He proposed that we listen to a presentation from these folks on the merits of their technology and subsequently make a decision, in the short term, of which to endorse. Tim Berners-Lee responded that this had already been discussed at the previous meeting and that the results are available on the web.

Martin Abadi commented that it is difficult to figure out what is going on with the above protocols when reading the specifications. He reiterated the usefulness of having an API which provides intermediate specs of what the protocol is supposed to do, and gives an indication to the programmer of what to expect.

Chuck Flink asked how we could make web servers secure; so the problems of web security and network security are not totally disjoint.

Diego Cassinera(Delphi) commented that if 100% security is not attained, the W3C should make a statement and provide a timeline.

Adam Cain(NCSA) said he hadn't heard much about new extensions to SHTTP that would let people make their own security modules.

Ron Rivest drafted up a list of possible HTTP scenarios.

  1. Information Exchange
  2. Payment Related Things
  3. Miscellaneous

It was mentioned that some of these might fall under the classification of "primitives" rather than "applications", e.g. bidding in an auction at Sotheby's is an application, while the simultaneous exchange of information is a primitive (from which specific applications could be built).

Should there be an API for the intersection of functionality for SSL and SHTTP? Is this useful? The consensus was not.

Should the group adopt SHTTP as it is now as the first step toward the secure protocol, with later inclusion of missing functionality including SSL-style secure channels? Roughly half of those present thought so; no one opposed the idea.


break

Security Code Subgroup

We need to work on the requirements specification and to get some code together (bottom-up), perhaps providing some form of SHTTP with the common library implementation. NCD, NCSA, Compuserve, Prodigy, OpenMarket and MCI said they might be able to work on it. Since the meeting, Cybercash has also joined the effort.

John Klensin wants a user/consumer to be involved in this process (in addition to the developers).

W3C Intellectual Property Rights

Jerry Waldbaum asked if the code that is written would be submitted with a certificate of originality, to avoid problems that might arise because of royalty issues. One possibility is to have the member companies draft some agreement so that if a company donates code, they would receive a certificate of originality or something similar. He is willing to get IBM attorneys working with MIT attorneys to come up with a proposal.

Tim Berners-Lee will check the W3 Consortium agreement concerning intellectual property rights. Followup is needed to see if it is useful to augment the W3 Consortium agreement.

Mike Dolan asked whether or not CommerceNet (EIT) would contribute code that they've written if a sample implementation of SHTTP were to be written. Allan Schiffman replied that the code currently uses RSA, so we'd have to deal with that... perhaps by using RSAREF. He commented that there is no algorithm that has a publicly available implementation (even with restrictions).

Tim Berners-Lee asked if RSA would be in favor, and Ron Rivest replied that RSA is generally in favor of promoting the use of the RSA cryptosystem.

Steve Kent mentioned that 7 years ago, he and the designers of PEM went through a similar situation and RSA was very accommodating. He also wanted to address the issue of modularity and export control: people should be aware that export control rules, from the U.S. standpoint, frown upon the export of software packages that don't have crypto in them but are designed to make the introduction of crypto trivial (e.g. packages that contain a well defined API). Others have a recent similar datapoint that supports this.

Ron Rivest said that he will talk to people at RSA and PKP about licenses.

Tim Krauskopf asked how we would go about obtaining code. Allan Schiffman replied that we'd have an implementation with as few restrictions as possible, based to some extent on his own code. But there would be much less emphasis on things that are considered local matters (like key management and access control), so that the resulting code represents the bare minimum. A line has to be drawn between the protocol engine and the back-end product.

Jerry Waldbaum asked for more technical details on things like key management.

Darren New indicated that there should be a review process. He also mentioned that Nathaniel Borenstein, who could not make it to the meeting, supplied him with some written comments and concerns.

Payment Protocols Subgroup

Tim Berners-Lee reiterated the need for a subgroup to look into electronic payment systems. The subgroup formed to explore payment protocols consists of Tal Rabin(MIT), Darren New(First Virtual), Win Treese(OpenMarket), Don Young(Digital), Juan Garay(IBM) and Rich Petke(Compuserve). Currently, their ranks also include Jason Bluming(NetMarket), Donald Eastlake(Cybercash) and Martin Abadi(Digital).

All subgroups formed today should make timetables for their activities available on the web. Also, Ron Rivest reminded us that all the subgroups created today may need to interact with each other.

The Next Round..

There was a discussion of when and where the next meeting should be, debating between the east and west coasts.

Several times were proposed, but it was difficult to schedule a time based on various conferences today's attendees were likely to attend, so it was decided to target a release of the code and then meet after that.

Allan Schiffman pointed out that with the trajectory we are talking about, the code will not be exportable....

Commodity jurisdictions and CDMF, IBM's weak-key cryptosystem based on DES, then became the topic of discussion. Allan Schiffman proposed that W3C create a nonstandard cryptosystem, i.e. use CDMF with two values changed so that it isn't CDMF anymore but a "new" algorithm. Then we would only need permission from IBM and the NSA.

A vote was taken to see if this was useful to explore, and about 10 people voted in the affirmative. A proposal was made for IBM, EIT and any other interested parties to meet after this meeting.

There was also a debate about the use of RSAREF.

EIT signed up to design a bumper sticker saying "I support the export of cryptography!". The idea is for everyone's pages to point to it.

Dale Dougherty stressed the need for a deadline to put some schedule together because collaborative work done via email happens very slowly.

The next meeting will not be before April, and is likely to be in June.

Someone mentioned that Lear(SGI) has said that he was writing a requirements document also.

The relationship with the IETF HTTP security working group was discussed. The IETF should be encouraged to run the working group. It will be a further source of input for this group, for example, on present and future RFCs.


Meeting Adjourned

These notes have been taken and prepared by Tal Rabin, Karen MacArthur and Tony Eng.

Last modified: Fri Mar 17 13:37:58 EST 1995