Talk:RSA/Archive 1

This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.

I have made a few changes to clear up some ambiguity


As I understand it, efficiently factoring large numbers => RSA broken, but it has not been proven that efficiently factoring large numbers is the only way to break RSA. -- The Anome

I agree with that. Thanks for making the distinction. -- CYD

I'm curious how much knowing that the product was built from two "large prime numbers" eases the factoring; i.e., only prime numbers within a certain size range need to be considered as factoring candidates. -- hagedis

From memory of previous discussions elsewhere, not very much. If one factor is very big, then the other is smaller, so the small factor is a soft target. Both numbers being approximately the same size (in terms of bit length) makes for a harder, not an easier, problem. -- The Anome

There are factoring algorithms which specifically target numbers whose two factors are roughly the same size (and they get faster the closer together the factors are). Sorry, no reference off the top of my head.
That's for a different definition of "same size". If P and Q are both 1024 bits (both have a 1 in the most significant bit) and were each chosen randomly (and uniformly, and independently), then they are about the same size, but factoring P*Q is hard. On the other hand, if |P-Q|<100, then they are about the same size, and factoring P*Q is trivial. When cryptographers talk about factors being the same size, they usually mean the bit length, not the difference. -LC
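For illustration, a minimal Python sketch (toy-sized numbers) of Fermat's factorization method, which recovers the factors almost immediately when |P-Q| is small, even though both primes have the same bit length:

  from math import isqrt

  def fermat_factor(n):
      """Search upward from ceil(sqrt(n)) until a*a - n is a perfect square."""
      a = isqrt(n)
      if a * a < n:
          a += 1
      while True:
          b2 = a * a - n
          b = isqrt(b2)
          if b * b == b2:
              return a - b, a + b   # n = (a - b) * (a + b)
          a += 1

  p, q = 10007, 10037               # toy primes with |p - q| = 30
  print(fermat_factor(p * q))       # (10007, 10037), found on the first iteration

Two randomly chosen 1024-bit primes are overwhelmingly unlikely to be this close together, which is why the first case described above remains hard.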

Seems I was wrong about the complexity of factoring :-P Can someone who knows this stuff add information about which complexity class factoring falls into (or is suspected of falling into)? -- CYD

OK, I've added that. --LC

This sentence was added to the article:

Like all public key algorithms, RSA is susceptible to the man-in-the-middle attack.

I'm not sure what the author intended to say here. If Alice and Bob know each other's public keys, then there's no MITM attack. If they don't, then this is a key-distribution problem, which is a problem for all cryptography (not just public key). In what way is public-key crypto more susceptible to MITM than symmetric crypto? --LC

In public key crypto, you have to be really sure that the public key you use does indeed belong to the person you want to send a message to; you can't just look up the public key in an (untrusted) phone book or internet database. I don't think this issue even arises with secret key cryptography. You can't attack secret key cryptography by seeding a public key database with bogus public keys. I also don't think that the term "man-in-the-middle attack" is even used in the context of secret key cryptography. --AxelBoldt

The term is frequently used in both forms of crypto (cf. [1]). You've noted one case: key distribution. To prevent MITM, Alice must send the key securely to Bob. That's a problem for both public-key and symmetric. For symmetric, it must be secure against both eavesdropping and tampering; for public-key, it only needs to be secure against tampering. People often discuss MITM for stream ciphers, since a MITM might change known bytes if there is no MAC. It's often discussed for block cipher modes, since some modes would allow a MITM to change known blocks in the middle of the message, if there's no MAC. It's discussed in more complex protocols such as Kerberos, which uses only symmetric algorithms to distribute symmetric keys. Such protocols have to be carefully designed to avoid MITM attacks. Most cryptographers would say that MITM attacks are an issue related to protocols above the level of the cipher (e.g. key distribution, block modes, MACs, etc.) and aren't really an issue related to the cipher itself. --LC
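For illustration, a toy Python sketch (a plain XOR keystream stands in for a real stream cipher, and the field offsets are made up) of the kind of change a man in the middle can make to known bytes when there is no MAC:

  import os

  def stream_xor(key_stream, data):
      return bytes(k ^ d for k, d in zip(key_stream, data))

  keystream = os.urandom(32)                  # shared keystream (toy stand-in)
  plaintext = b"PAY 0000100 DOLLARS TO ALICE"
  ciphertext = stream_xor(keystream, plaintext)

  # The attacker knows the amount sits at bytes 4..10 and rewrites it without
  # touching the key, by XORing the ciphertext with (old bytes XOR new bytes).
  old, new = b"0000100", b"9999999"
  delta = bytes(o ^ n for o, n in zip(old, new))
  tampered = ciphertext[:4] + stream_xor(delta, ciphertext[4:11]) + ciphertext[11:]

  print(stream_xor(keystream, tampered))      # b'PAY 9999999 DOLLARS TO ALICE'

A MAC over the ciphertext (or an authenticated mode) is what rules this out, which is the point about the attack living above the cipher itself.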


about the complexity of factoring (or its decision problem variant):

"It is suspected to be outside of P and NP-Complete and Co-NP-Complete."

This is probably true if you parse that as (outside P) and (outside NP-complete) and (outside Co-NP-complete). I think the text is a bit confusing as it is.

I thought it was clear before, but I've reworded it now. Maybe that's less confusing. --LC
Thanks, much better.

Note that "It is suspected to be" is probably true. A lot of experts suspect it, but certainly not all. My own guess is that it is probably in P, though no polynomial-time algorithm is known so far. It's almost certainly not NP-complete, and almost certainly not Co-NP-complete. It has been proven that if it is one of those, then P is the same as NP.

No. If it's in NP-complete, then NP-Complete = Co-NP-Complete. That would be a very surprising result, but it wouldn't necessarily mean that P=NP. --LC
Yes it would. If there is any problem that is both in NP-complete and also in Co-NP, then P=NP. I don't have a citation handy, but I think this is mentioned in Garey and Johnson, with a cite to the original paper.
No. See Garey and Johnson's theorem 7.2. If a problem is both in NP-complete and also in co-NP, then NP=co-NP, which implies NP-complete=co-NP-complete. They don't say it would imply P=NP. --LC
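In symbols, the implication chain being described is

  L \in \text{NP-complete} \cap \text{co-NP} \;\Rightarrow\; \text{NP} = \text{co-NP} \;\Rightarrow\; \text{NP-complete} = \text{co-NP-complete},

which is a weaker conclusion than P = NP.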
Ok, I'll have to dig my copy out of the mess and look at this again. It's been a couple of years. Until I figure it out, or someone knowledgeable finds a cite, please leave the last few paragraphs here as a reminder.

The best currently known algorithm (assuming two prime factors of approximately equal size, a composite larger than 450 bits, and no particular special form) is the General Number Field Sieve, which is asymptotically much slower than polynomial but much faster than exponential.

Yes. In fact I corrected an error on that point earlier today, and mentioned the exact bound here. --LC
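For reference, the bound in question is the heuristic running time of the general number field sieve,

  L_n\left[\tfrac{1}{3}, \left(\tfrac{64}{9}\right)^{1/3}\right] = \exp\left( \left( \left(\tfrac{64}{9}\right)^{1/3} + o(1) \right) (\ln n)^{1/3} (\ln \ln n)^{2/3} \right),

which is sub-exponential but super-polynomial in the size of n.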

Some of the decision problem variants of factoring belong to a complexity class called lambda-complete.

I'm not familiar with lambda-complete. It would be great if you could write an article on it. If you do, go ahead and add a link to it in the table in computational complexity theory. --LC

I wonder if it is good advice to choose e small. That might induce people to simply choose the smallest e possible, in that way giving away some information about φ(N) which could conceivably be exploited in cracking. --AxelBoldt

In practice, people often use e=3 or e=65537, because they allow faster encryption, and this is generally considered safe. The latter is sometimes preferred because with a small e there is an attack if you encrypt the same message to e (or more) different people. However, in a typical hybrid system the "message" is actually a random symmetric key, so that isn't a concern. --LC
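For illustration, a toy-sized Python sketch (hypothetical, insecure parameters) of that same-message attack for e = 3: given three ciphertexts under coprime moduli, the Chinese Remainder Theorem recovers m^3 over the integers, and an integer cube root recovers m, with no factoring at all.

  def crt(residues, moduli):
      """Chinese Remainder Theorem for pairwise coprime moduli."""
      prod = 1
      for n in moduli:
          prod *= n
      total = 0
      for r, n in zip(residues, moduli):
          p = prod // n
          total += r * p * pow(p, -1, n)
      return total % prod

  def icbrt(n):
      """Integer cube root by binary search."""
      lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
      while lo < hi:
          mid = (lo + hi) // 2
          if mid ** 3 < n:
              lo = mid + 1
          else:
              hi = mid
      return lo

  e = 3
  m = 424242                                        # the common "message"
  moduli = [1009 * 1013, 1019 * 1021, 1031 * 1033]  # three toy RSA moduli
  ciphertexts = [pow(m, e, N) for N in moduli]

  m_cubed = crt(ciphertexts, moduli)                # m**3 itself, since m**3 < N1*N2*N3
  print(icbrt(m_cubed) == m)                        # True

Padding the message differently for each recipient (or using e = 65537) defeats this particular attack.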


If and when Shor's algorithm becomes practical, where will public-key encryption go? If discrete log also falls to quantum computers, what is left? The Anome

Quantum cryptography will become practical way before quantum computers. -- CYD


How should we name the attacks on (academic) RSA, basically we have,

  • Attacks when a small private exponent is used (Wiener attack)
  • Attacks when a small public exponent is used (both multiple message attack and lattice basis reduction attacks)
  • Attacks when modulus is shared
  • Blinding attacks
  • Partial key knowledge attack
  • Implementation attacks
  • Others ?

--Imran 01:22, 16 Mar 2004 (UTC)

Summary of RSA

"In most basic terms, RSA encryption takes a message (denoted m), and encrypts it with some function (denoted E). So the sender of the encrypted message sends over the function applied to m: E(m). The recepient will have in their posession some function that will decode the encoded function (denoted D). The function D will work by applying the inverse of E so the original message is returned. Thus: D(E(m)) = m."

Someone (User:206.47.184.148) added this to the lead section. I think it's a good idea to include a brief overview of RSA in the lead section, since the description later on is somewhat complex. However, I think this above description is too generic; it could apply to any cipher, even a symmetric algorithm. Can we come up with a more specific description, which is also brief (one paragraph) and not overly technical? — Matt 08:27, 11 Jun 2004 (UTC)
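For concreteness, a toy Python sketch (hypothetical, insecure parameters) of the specific functions E and D that RSA uses: both are modular exponentiations, with exponents chosen so that D(E(m)) = m.

  p, q = 61, 53                      # two small primes (toy values)
  N = p * q                          # 3233, the public modulus
  phi = (p - 1) * (q - 1)            # 3120
  e = 17                             # public exponent, coprime to phi
  d = pow(e, -1, phi)                # private exponent, e*d ≡ 1 (mod phi): 2753

  def E(m):                          # encrypt with the public key (N, e)
      return pow(m, e, N)

  def D(c):                          # decrypt with the private key (N, d)
      return pow(c, d, N)

  m = 65
  assert D(E(m)) == m                # 65 -> 2790 -> 65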

17-July-2004-1:41pm

I made the following change:

It was stated the encryption procedure was c \equiv n^e (mod N). This is not actually a procedure, it is just a statement that an equivalence holds modulo N. Encryption is done by computing n^e \mod N; note there are no parentheses here. I made a similar change for the decryption procedure.
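A toy example of the distinction: with N = 3233, e = 17 and n = 65, computing n^e \mod N yields the single value c = 2790, whereas the congruence c \equiv 65^{17} (mod 3233) is a relation satisfied by 2790, 2790 + 3233, 2790 - 3233, and so on.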

Timing attack

The last two sentences in the section on the timing attack are somewhat misleading. "Blinding" is a method where instead of, say, signing the value H, you sign the value rH (or r^e H), for a random value r. This value r is chosen each time an exponentiation is to be done and is kept secret. This prevents adversaries from knowing the input to the exponentiation routine -- this information is required for the timing attack. So, "blinding" is a form of randomization; it does not make the exponentiation routine run in constant time.
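For illustration, a minimal Python sketch (toy, insecure parameters) of that blinding step for RSA signing: the private-key exponentiation only ever sees r^e * H mod N for a fresh random r, and the factor r is divided out afterwards.

  import secrets
  from math import gcd

  p, q = 61, 53                              # toy RSA key, far too small for real use
  N, e = p * q, 17
  d = pow(e, -1, (p - 1) * (q - 1))

  def blinded_sign(H):
      while True:
          r = secrets.randbelow(N - 2) + 2
          if gcd(r, N) == 1:
              break
      blinded = (pow(r, e, N) * H) % N       # the exponentiation's input is randomized
      s_blinded = pow(blinded, d, N)         # equals r * H^d mod N
      return (s_blinded * pow(r, -1, N)) % N # unblind: divide out r

  H = 1234
  assert blinded_sign(H) == pow(H, d, N)     # same signature as unblinded signing

The exponentiation still takes a data-dependent amount of time; the point is just that the attacker no longer knows what input it ran on.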

Copyvio

Just from reading, the content seems very similar in feel and wording to the book Practical Cryptography, Niels F. & Bruce S., Wiley Publishing Inc., 2003 (copyright to the authors). If anyone wants to check it, please do. Falcon 02:35, 31 Aug 2004 (UTC)

I'll try and look at it. Can you identify any specific passages / sentences? — Matt 03:02, 31 Aug 2004 (UTC)