A recent paper, “Fast Factoring Integers by SVP Algorithms” by Claus P. Schnorr, claims a significant improvement in factoring that “*destroys the RSA cryptosystem*”. If true, it would be practical to demonstrate on well-known RSA factoring challenges.

**No such demonstration has been made.** Without this, assessing the correctness of the paper will have to wait for reviewers to wade through the details and give their feedback.

This paper drew the attention of many cryptographers because Schnorr, notable for Schnorr signatures, is an accomplished cryptographer who has worked on factoring problems for at least a decade. There have been significant improvements in factoring over the last 20 years, so a big new result from a known researcher is at least plausible.

Initial misspellings and version inconsistencies led to speculation that the submission was a prank. However, the provenance of the paper has been confirmed: it is indeed Schnorr.

Schnorr’s paper claims to factor **400-bit moduli in 4.2·10⁹ operations** and **800-bit moduli in 8.4·10¹⁰ operations**. The 800-bit claim would be **36 bits of work**.

**That would be demonstrable on commodity hardware.**

For comparison, the two most recent factoring records using CADO-NFS are RSA-240 (795 bits, December 2019) and RSA-250 (829 bits, February 2020).

The Crypto 2020 paper “Comparing the Difficulty of Factorization and Discrete Logarithm: a 240-digit Experiment” covers some of these results.

According to the claims in Schnorr’s paper, it should be practical to set significant new factoring records. There is a convenient 862-bit RSA challenge that has not been factored yet. Posting its factors, as done for the CADO-NFS team’s records, would lend credence to Schnorr’s paper and encourage more review of the methodology.

## Comments

An example, by hand, from the paper's author, where he uses this algorithm to factor a number, would be great. Even a small number that's easy to factor by brute force would be enough to actually prove that his claims are true. Then we could do a code implementation, run it against RSA challenge numbers, and see if this is a prank or the real deal.

For those of us less familiar with cryptography and RSA in general: what are the implications if RSA is broken? What are the mitigations that would need to occur in its place?

Reminiscent of Craig Wright's claim to be Satoshi.

It doesn't matter what you claim with words if you can't back it up with cryptographic evidence.

Shut up and prove you've done (or can do) the work.

Are you really comparing a con artist with one of the most famous cryptographers?

Dear lord no. I can see how it might come across like that.

More drawing attention to the wider theme that we generally should not take people at their word when we have the option to demand proof of work that can't be faked or mistaken.

Don't trust. Verify.

These are not trivial algorithms to implement, and the other factorization records require months of work from implementation experts. It's not an easy task, and theory work stands independently of implementation effort.

The claimed number of operations is low enough that demonstrating the algorithm in practice does not require a highly optimized implementation.

Still, if this new algorithm could threaten 1024 bit RSA using 10,000 computers for 10,000 days after a 10,000x speed up from optimisation, it should be able to solve the RSA-896 factoring challenge with a single computer for a single day without optimisation, shouldn't it?

After all, 2^896 is 38 orders of magnitude smaller than 2^1024.
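A quick sanity check on that figure, in a couple of lines of Python:

```python
import math

# Ratio between the 1024-bit and 896-bit search spaces, in decimal
# orders of magnitude: log10(2^1024 / 2^896) = (1024 - 896) * log10(2).
orders = (1024 - 896) * math.log10(2)
print(round(orders, 1))  # ≈ 38.5
```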

What does "36 bits of work" mean, sorry?

My naive assumption would be, takes 2^36 cpu operations

If so, 2^36 ~ 7 x 10^10, so a few seconds on GHz processors.

2^36 arithmetic operations is what is claimed. That's not quite the same as CPU operations, because we only have 64-bit CPUs with up to 512-bit vector instructions, but we're talking about factoring 800-bit numbers. So we need to allow for several CPU instructions to implement each of the required arithmetic operations.
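A rough sketch of that overhead, assuming 64-bit limbs and naive schoolbook multiplication (real implementations would use vector units and smarter algorithms, so this is only an upper-bound estimate):

```python
import math

LIMBS = math.ceil(800 / 64)   # 64-bit words needed per 800-bit operand
WORD_MULTS = LIMBS ** 2       # schoolbook big-int multiply, in word-multiplies

claimed_ops = 8.4e10          # arithmetic operations claimed for 800-bit moduli
cpu_mults = claimed_ops * WORD_MULTS

print(LIMBS, WORD_MULTS)      # 13 limbs, 169 word-multiplies per big multiply
print(f"{cpu_mults:.1e}")     # ≈ 1.4e13 word-multiplies overall
```

Even with that constant factor, the total stays well inside what commodity hardware can do.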

Yeah, it would be great if they could translate that into core-years to match the references they listed.

I'm guessing it's a shorthand for the order of units of work. log2(8.4E10) = 36.3 bits of operations

I wonder if Schnorr is going senile like Atiyah.

If someone wasn’t a cryptographer, but does occasional security tasks at work, what is the takeaway? RSA needs to be 4096 or higher now, or that similar techniques in the future might make RSA a bad choice altogether?

Don’t worry - yet. This is either a nothingburger, or it’s going to be a nightmare for everyone, all at once (ever dealt with web PKI? you will get a chance if this is true). But there’s no real current takeaway until we know if this approach works, and if so how extensible it is to RSA, especially 2048-bit RSA.

There are plenty of techniques in the past that make RSA a bad choice altogether.

If you are going with it anyway, yeah, 4k bits is a safe choice for making it reasonably secure right now (2k being a bare minimum), but remember, attacks always get better, never worse, and RSA has a fair share of possible attacks.

Devil's advocate: Posting the factors requires implementation work, then optimization, then a manageable but possibly still not trivial amount of resources and time - and likely a lot of trial and error. It is perfectly conceivable that a paper would be published before the implementation is actually better than a slower but heavily optimized approach. (I don't even try to understand the paper, but I've seen a mention that it's a storage tradeoff, which may make it a very different kind of optimization problem.)

Do we know that the paper is definitely from Schnorr? (Edit: The article claims its provenance is confirmed). The "destroys the RSA cryptosystem" claim is now part of the paper. While anyone can make mistakes, I would expect such claims to be at least somewhat carefully checked before releasing them.

Either way, I expect that we'll see either a retraction/correction or factors within weeks.

I am no cryptographer. I did implement, from the paper, Yao's "socialist millionaire" cryptographic protocol but... It was only a few lines of code and a very simple (to understand, not to come up with) paper.

Now I just looked at that Schnorr paper and, well, I can tell you that I'm not going to be the one implementing it : (

I'm skeptical. The paper is too tough for me to digest without spending days/weeks/lifetimes focusing on it (and there are many who can do it much faster obviously). But I think that if RSA is materially broken, we'll know it from movements in the ground (eg, sudden mysterious forged signatures) by the time a paper is published.

I don't think such a secret could be kept for more than a few minutes without someone immediately proceeding to weaponize it.

Is prime factorisation used in SHA256 ? Would I be able to solo mine from my CPU again ?

No. Prime factorization is used for public key cryptography, not hashing.

No, to both questions. Implementing SHA256 is actually quite easy, no more than 50 lines of code. My current implementation that I use for my personal use is under 100 lines of code including variable declarations (not just executable lines of code).

https://en.wikipedia.org/wiki/SHA-2
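For the curious, a from-scratch SHA-256 really does fit in roughly the claimed line count. This sketch derives the round constants exactly, in integer arithmetic, from the square/cube roots of the first primes rather than hardcoding the table, and can be checked against hashlib:

```python
import hashlib
from math import isqrt

def _primes(n):
    ps, c = [], 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

def _icbrt(n):
    # integer cube root (floor) by binary search -- exact, no float rounding
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Initial state / round constants: fractional bits of sqrt/cbrt of the first primes.
_H0 = [isqrt(p << 64) & 0xFFFFFFFF for p in _primes(8)]
_K = [_icbrt(p << 96) & 0xFFFFFFFF for p in _primes(64)]

def _rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def sha256(msg: bytes) -> bytes:
    M = 0xFFFFFFFF
    bitlen = 8 * len(msg)
    # Pad: 0x80, zeros to 56 mod 64 bytes, then 64-bit big-endian bit length.
    msg += b"\x80" + b"\x00" * ((55 - len(msg)) % 64) + bitlen.to_bytes(8, "big")
    h = list(_H0)
    for off in range(0, len(msg), 64):
        w = [int.from_bytes(msg[off + 4 * i:off + 4 * i + 4], "big") for i in range(16)]
        for i in range(16, 64):
            s0 = _rotr(w[i - 15], 7) ^ _rotr(w[i - 15], 18) ^ (w[i - 15] >> 3)
            s1 = _rotr(w[i - 2], 17) ^ _rotr(w[i - 2], 19) ^ (w[i - 2] >> 10)
            w.append((w[i - 16] + s0 + w[i - 7] + s1) & M)
        a, b, c, d, e, f, g, hh = h
        for i in range(64):
            S1 = _rotr(e, 6) ^ _rotr(e, 11) ^ _rotr(e, 25)
            t1 = (hh + S1 + ((e & f) ^ (~e & g)) + _K[i] + w[i]) & M
            S0 = _rotr(a, 2) ^ _rotr(a, 13) ^ _rotr(a, 22)
            t2 = (S0 + ((a & b) ^ (a & c) ^ (b & c))) & M
            hh, g, f, e, d, c, b, a = g, f, e, (d + t1) & M, c, b, a, (t1 + t2) & M
        h = [(x + y) & M for x, y in zip(h, [a, b, c, d, e, f, g, hh])]
    return b"".join(x.to_bytes(4, "big") for x in h)
```

No prime factorization anywhere in there - just bit rotations, XORs, and 32-bit additions.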

Crypto noob question: Wouldn't it be prudent to switch to something like ECDSA (hearsay says it is stronger) if there was even a hint that this was possible?

If a major government got wind that such work was going on, wouldn't it be prudent to publish before you are disappeared? I assume high-profile crypto research people are spied on.

I can imagine a certain pure-theorist mindset being confident enough in their reasoning, but not yet their coding, to report this first. Or, strategically holding definitive proof back as a hammer to deploy once the doubters reveal themselves.

Why not let others do the rote reduction-to-practice?

Why not create an example where your theory was correct, & your reputation was on the line, that took a little while to resolve – but when it does, it does so definitively in your favor, so you are more trusted in future pre-definitive-verification pronouncements?

(I don't know enough about Schnorr-the-person to know if this fits his personality, but I can imagine such personalities.)

That's really all there is to it. Pudding, proof, etc.

> the provenance of the paper has been confirmed: it is indeed Schnorr.

What I read is that someone contacted Schnorr over email to get this confirmation.

I’m not saying the confirmation is wrong. And I’m not saying email cannot convey information.

Well, it's definitely suspect now that RSA is broken.

Beautifully done.

So somebody cracked RSA and took over gmail.com domain just to fake him being Schnorr.

"Email cannot convey information"? Baloney. It does all the time.

You seem to mean something different from what your words say...

“I’m not saying...”

see?

Factors or gtfo

Get the factors out

This take is rather naive. Those RSA factoring records were done by a large international team of researchers, using well established algorithms and decades of work on implementing those methods as fast as possible.

The blog post says the paper mentions 8.4e10 operations for factoring, but I can't find that number in the paper anywhere. The post then states: "The 800-bit claims would be 36 bits of work." I don't know what that means.

[edit]: the numbers are in the new version (https://eprint.iacr.org/2021/232). I was looking at the old version uploaded yesterday.

> The post states that 800-bit claims would be 36 bits of work. I don't know what that means.

From the article: "Schnorr’s paper claims to factor ... 800-bit moduli in 8.4·10¹⁰ operations"

2^36 ~= 8.4·10¹⁰, so I guess "36 bits of work" means 2^36 operations. Analogous to how a password with 36 bits of entropy would require 2^36 guesses. My first time encountering the phrase "bits of work" as well, though.
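The back-of-envelope version, taking the paper's operation counts as given:

```python
import math

bits_400 = math.log2(4.2e9)    # claimed ops for 400-bit moduli
bits_800 = math.log2(8.4e10)   # claimed ops for 800-bit moduli
print(round(bits_400, 1), round(bits_800, 1))  # ≈ 32.0 and 36.3
```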

2^36 "operations" can still take a lot of time if each operation is multiplying two giant numbers, unless the meaning of "operation" is somehow normalized to mean e.g. 64 bit integer operations.

It didn't take long for custom ASICs for mining bitcoin to emerge. It wouldn't take long for custom ASICs to do these kinds of operations a lot faster than on a general purpose CPU to emerge.

> 2^36 "operations" can still take a lot of time if each operation is multiplying two giant number

It took me 3.3 years of actual computation time to do about 2^46 multiplication+modulo operations on 2048-bit numbers on a Core i7. 2^36 operations on 2048-bit numbers should be doable in a day on an eight-year-old CPU.

P.S: that was on a single core, for the problem I solved was explicitly created as to not be parallelizable.
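Extrapolating from those numbers (single core, same per-modmul cost, purely back-of-envelope):

```python
seconds = 3.3 * 365.25 * 24 * 3600   # 3.3 years of wall-clock computation
rate = 2 ** 46 / seconds              # ~2048-bit modmuls per second on that Core i7
days_for_2_36 = 2 ** 36 / rate / 86400
print(round(days_for_2_36, 2))        # ≈ 1.18 days
```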

Supposing the paper does describe a more efficient factorisation algorithm, that does not imply that factoring a 800 bit prime (like the author of this article suggests) would be cheap.

factoring a 800 bit prime is definitely cheap. ;-)

In fact, I'll do it right now for you for only one dollar per bit.

It's in the abstract:

>Our accelerated strongprimal-dual reduction of [GN08] factors integers N≈2^400 and N≈2^800 by 4.2·10^9 and 8.4·10^10 arithmetic operations.

Increasing the length by a factor of 2^400 only increased the amount of work by a factor of 20? Staggering, if true in general.

Actually, you're only increasing the length of the number by a factor of 2, since 2^400 is a 400-bit number.

If true, it's still leaps and bounds ahead of anything we have today, though.

Fair point. "Increasing the key [not the length of the key] by a factor of 2^400".

Yeah, that's what got me.

Doubling the length of the key only requires a single order of magnitude more work? If that turns out to be true, I'm going to need to revise my beliefs about how the universe works. In particular, information theory and thermodynamics, because multiplying two numbers together doesn't preserve information about what the factors were. Or at least pretty much everyone thought so. (Caveat: if the values of primes turn out to have a predictable pattern, that could provide the needed information. However, that would mean that the Riemann Hypothesis is false, and that'd be an even more astounding result.)

Thing is, RSA keys are rather 'sparse' because they are the product of two primes. There aren't that many primes, so there aren't that many numbers that only have two (proper) divisors.

Hence, if you look at the strength of the currently best-known attack on RSA keys, you see that the key strength grows quite slowly as the keys get larger. This is purely from how sparse prime numbers are. From [1], which quotes NIST in 2015, we see (modulus size → symmetric-equivalent strength, both in bits): 1024 → 80, 2048 → 112, 3072 → 128, 7680 → 192, 15360 → 256.

> Because multiplying two numbers together doesn't preserve information about what the factors were. Or at least pretty much everyone thought so.

Technically the information is still there, it was just thought to be very hard to extract. This paper shows an easier way to extract it.

[1] https://crypto.stackexchange.com/questions/8687/security-str...

OK, here is a brief overview for people:

To factor a number N (assumed to essentially be the product of two very large primes), find 'short' lattice vectors [0] using LLL [1] (and BKZ reduction? [2]) that yield many relations of the form

u_i = p_{i,0}^{e_0} · … · p_{i,n-1}^{e_{n-1}},    u_i − v_i·N = q_{i,0}^{f_0} · … · q_{i,n-1}^{f_{n-1}}

where the p, q are small primes. Numbers that have all their factors less than some prime, B, are said to be "B-smooth". In the above, (u_i) and (u_i − v_i·N) are p_{i,n-1}-smooth and q_{i,n-1}-smooth, respectively.

Construct enough u_i and (u_i − v_i·N) that you can combine them into a product over small primes r_i of the form

∏_i r_i^{2·b_i} ≡ 1 (mod N)

where each b_i is an integer. Since all the exponents (2·b_i) are even, we have the potential to find a square root of 1, which can resolve to two different numbers since N is composite. One of those is y = ∏_i r_i^{b_i}; the trivial roots are ±1. Since y² = 1 mod N, we get (y−1)(y+1) = 0 mod N. If neither (y−1) nor (y+1) is 0 mod N, then each must share a factor with N and we've successfully factored.

The trick is, of course, finding the smooth numbers. To do this, a lattice basis is made such that you find a short integer relation of the form

u ~= v·N

where ~= means "approximately equal to". u is chosen as the product of the primes p_i with exponent a_i > 0 and v is chosen as the product of the primes with a_i < 0. The hope is that (u − v·N) is also p_{n-1}-smooth, which, as far as I understand, is what most of the math in the paper is trying to justify.
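To make the congruence-of-squares machinery concrete, here is a toy factoriser for tiny N. It finds smooth values of x² mod N by brute-force trial division (where Schnorr's paper would use lattice reduction) and then searches for a subset of relations with an all-even combined exponent vector. This is only a sketch of the combining step, not of the paper's algorithm:

```python
from itertools import combinations
from math import gcd, isqrt

def factor_base(B):
    return [p for p in range(2, B + 1)
            if all(p % q for q in range(2, isqrt(p) + 1))]

def smooth_exponents(m, fb):
    # Exponent vector of m over the factor base, or None if m is not smooth.
    exps = []
    for p in fb:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        exps.append(e)
    return exps if m == 1 else None

def congruence_of_squares(N, B=30):
    fb = factor_base(B)
    rels = []  # pairs (x, exponent vector) with x^2 mod N smooth over fb
    for x in range(isqrt(N) + 1, N):
        v = smooth_exponents(x * x % N, fb)
        if v is not None:
            rels.append((x, v))
        if len(rels) >= len(fb) + 3:
            break
    # Find a subset whose combined exponent vector is even everywhere:
    # then prod(x_i)^2 ≡ y^2 (mod N) with y built from the half-exponents.
    for r in range(1, len(rels) + 1):
        for combo in combinations(rels, r):
            total = [sum(v[i] for _, v in combo) for i in range(len(fb))]
            if any(e % 2 for e in total):
                continue
            x = 1
            for xi, _ in combo:
                x = x * xi % N
            y = 1
            for p, e in zip(fb, total):
                y = y * pow(p, e // 2, N) % N
            g = gcd((x - y) % N, N)
            if 1 < g < N:
                return g
    return None
```

The brute-force subset search stands in for the GF(2) linear algebra a real implementation would use.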

The main innovation here, as far as I can tell, is that Schnorr is fiddling with the 'weighting' of the main diagonal when constructing the lattice basis. I interpret this as basically trying to randomize the initial lattice basis so that the chances of getting a different integer relation (for eventual construction of u,v) is more probable.

I've been confused about this for over a decade as variants of this algorithm, and Schnorr's work in general, have been well published. For example, there's a paper from 2010 on "A Note on Integer Factorization Using Lattices" by Antonio Vera which discusses Schnorr's [3] construction.

Is Schnorr trying to shout louder so people will listen or is there something else fundamentally flawed with this type of algorithm?

Just a word of warning, LLL solves polynomial factorization in polynomial time (given a polynomial with integer coefficients, find its factor polynomials, also with integer coefficients) [4] and has been used to break other (now very old) cryptosystems [5]. If there's a candidate algorithm to solve integer factoring, lattice reduction (LLL, PSLQ, etc.) is it.

I know of fplll that's a stand alone (FOSS) implementation of LLL and some extensions (BKZ, etc.) [6].

[0] https://en.wikipedia.org/wiki/Lattice_reduction

[1] https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%...

[2] https://www.newton.ac.uk/files/seminar/20140509093009501-202...

[3] https://arxiv.org/pdf/1003.5461.pdf

[4] https://en.wikipedia.org/wiki/Factorization_of_polynomials#F...

[5] https://web.eecs.umich.edu/~cpeikert/lic13/lec05.pdf

[6] https://github.com/fplll/fplll
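Since lattice reduction keeps coming up: the 2-dimensional case (Lagrange–Gauss reduction, which LLL generalizes to higher dimensions) fits in a few lines and shows the basic size-reduce-and-swap loop. This is only an illustrative sketch for small integer bases, unrelated to Schnorr's specific construction:

```python
def gauss_reduce(u, v):
    # Lagrange–Gauss reduction of a 2D integer lattice basis:
    # size-reduce v against u, swap when v becomes the shorter vector,
    # stop when no progress is made (u is then a shortest lattice vector).
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        m = round(dot(u, v) / dot(u, u))      # nearest-integer projection
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u
```

For example, the basis (5, 8), (8, 13) spans all of Z², and the loop recovers unit-length basis vectors.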

Thanks for summarizing, and talking about what's novel here.

In the paper Schnorr suggests that this algorithm factors 800-bit integers in ~10^11 operations [36 bits], whereas the Number Field Sieve uses ~10^23 [76 bits]. Does that 76-bit figure represent the current state of the art, more or less?

Also, since the paper talks only in terms of specific sizes of integers, I assume there's no claimed asymptotic speedup over existing methods?

LLL is effectively a polynomial time algorithm, though the exponent is large (it's like O(n^5) or O(n^6), also depending on bit length, etc.), so it might fall into the 'galactic algorithm' [0] territory.

The runtime depends on the frequency of primes, which is how you know you can run the algorithm and have a good chance of finding a "B-smooth" number. So that frequency might decrease too fast as to make it not a polynomial time or quasi-polynomial time algorithm.

In terms of my own opinion: in practice, algorithms with large polynomial exponents have a way of becoming increasingly more efficient, so this isn't usually a big blocking point, especially not for long. For example, the interior point method eventually lent itself to faster implementations. In terms of the frequency of primes, I suspect this is also not a big slowdown factor.

Anyway, like I said, I've been confused about this point for a long time. It seems like this method is providing effectively a polynomial time algorithm for integer factoring. I read this paper and others by Schnorr as "this is 'efficient' in the theoretical sense but the polynomial exponent is so large as to make it computationally infeasible right now" but then I don't know why this hasn't been bigger news. Maybe because the algorithm has a randomness component to it?

I don't know. If anyone is wiser than me, I would appreciate some enlightenment.

[0] https://en.wikipedia.org/wiki/Galactic_algorithm

My impression was that the big optimization was the success-rate threshold used to guide the enumeration more quickly to the actually correct vector, rather than wasting cycles on smaller iterative improvements. But I'm still digesting the paper, my linear algebra intuition is so rusty I may need a tetanus shot from using it, and what little number theory I've got doesn't amount to enough to get a shoe shine in 1930.

OK, a little more context. Sophie Schmieg has a twitter thread discussing some of these issues:

https://twitter.com/SchmiegSophie/status/1367197172664389635

They mostly mirror the article this post is under, namely "show me the factors". Schnorr claims spectacular run times but it isn't clear he provides actual data from runs he's done. Schnorr also doesn't discuss complexity considerations in the detail they would deserve while focusing on basic details that I suppose wouldn't show up in a paper like this normally.

This was heavily discussed yesterday. (Edit: this next bit was out of date:) It seems the provenance of the paper and the 'destroy' claim are unclear.

“This destroys the RSA cryptosystem” - https://news.ycombinator.com/item?id=26321962 - March 2021 (140 comments)

Nah, it’s confirmed as from Schnorr.

He even reiterated his belief it leaves RSA rekt.

All that’s left is the veracity of the attack.

Ok thanks! I was out of date.

Isn't it typical to release the paper first, for peer vetting, ahead of any actual working proof?

It seems like the only reason for the "put up or shut up" reactions is that "destroys RSA" comment in the submitted abstract... which isn't in the actual paper.

> is that "destroys RSA" comment in the submitted abstract... which isn't in the actual paper

I think it is - https://eprint.iacr.org/2021/232.pdf

Ah, I see. It's been removed in a newer revision of the paper. https://www.math.uni-frankfurt.de/~dmst/teaching/WS2019/SVP9...

The newer 12-page version on the preprint server has a PDF creation date of 3/3/2021, 11:32:56 AM, created with pdfeTeX-1.30.4.

https://eprint.iacr.org/eprint-bin/getfile.pl?entry=2021/232...

The previous (reportedly wrongly uploaded) version is from 12/5/2019, 9:10:13 AM created with pdfeTeX-1.30.4.

https://eprint.iacr.org/eprint-bin/getfile.pl?entry=2021/232...

The university website version is from 3/5/2020, 12:00:19 PM created with pdfTeX-1.40.15.

These dates & times are MM/DD/YYYY & CET.

A co-editor of the Cryptology ePrint Archive confirmed the submission on twitter:

https://twitter.com/Leptan/status/1367103240228261894

That times out for me, I guess www.math.uni-frankfurt.de is now getting more attention than usual ;-)

Here is a version in the Google cache, it has the date of tomorrow on it "work in progress 04.03.2020" and no longer contains the "destroys RSA" sentence: https://webcache.googleusercontent.com/search?q=cache:E0L-S3...

Other way around I think. Your link is an old version of the paper. The one on eprint was just updated today with a version of the paper that adds the "destroys RSA" line and removes the "work in progress" line (put it in the wayback machine to see the version that was there yesterday without the claim of destroying RSA)

The factors could be included in the manuscript as an example..

The sum-of-three cubes announcement was tweeted pretty easily.

https://twitter.com/robinhouston/status/1169877007045296128

It's easier to drum up support for your paper when you have a quick way to prove to the community of mathematicians that your results are golden.

EDIT: The original webpage: http://math.mit.edu/~drew/sumsofcubes.html

As you can see, the sum-of-cubes announcements are very terse. Ultimately pointing to the following link: https://share.cocalc.com/share/900eec7e-0710-4e2f-a03a-dba01...

That kind of website / tweet is a "drop the mic" moment. It really makes people pay attention.

That's not how science works. Yes, if your algorithm is simple enough and you can create an implementation, then it is good that you produce a working version. But it may be more complicated to implement the algorithm than writing a paper. This doesn't mean that the implementation is impossible.

> But it may be more complicated to implement the algorithm than writing a paper

Then why would I trust it? You don't need to write code, you need to write an example

As Linus Torvalds says: talk is cheap, show me the code

Academia is full of "paper scientists" that put out papers but produce nothing of value.

Academia is also full of postgraduate students who would be more than willing to work together and put proof-of-concept code out with the paper.

> That's not how science works.

This isn't science, it's math. As the article mentions, there is an 862-bit RSA challenge that hasn't been factored yet. Factoring it should be possible on commodity hardware if the claims in the paper are true. So why not just do it? The test of success is simple: either you win the challenge or you don't.

There are example factorizations around page 11 if I'm reading this correctly. Haven't run them through yet because I'm refreshing my atrophied linear algebra knowledge and walking through some of the source papers, so some work is in there.

For a skilled researcher it may be an order of magnitude easier to write the whitepaper than to code the implementation.

That's exactly how maths works, specifically in the cases where the claim is easily backed by a demonstration. A famous example:

https://en.wikipedia.org/wiki/Frank_Nelson_Cole

If you 'destroyed RSA' through better factorization, all you have to do is start publishing factors of RSA challenge numbers.

Matthew Green has a fun thread about other ways to approach this along with an interesting "real talk about factoring" sidebar by Nadia Heninger:

https://twitter.com/matthew_d_green/status/13669500931784990...

So this is why I’ve suddenly been getting new interaction on that old tweet! I’m glad I saw this comment, because I was quite confused.

The whole paper is linked: https://eprint.iacr.org/2021/232.pdf

If you really think your paper destroys RSA, I think there are ethical questions the authors must decide before publishing it.

In particular, I think the right process would be:

1. Give some brief description of the result (eg factoring numbers in O(...)), and some proof (eg a factorisation of the next rsa semiprime, possibly more) that convinces people that your claims are true

2. Wait a while for people to have the chance to not be burned

3. Publish the paper

Instead, the authors seem to be going for:

1. Publish the paper with a provocative abstract.

2. Wait to see who implements the algorithm first.

It doesn’t seem the best idea to me, but what do I know?

The pursuit of knowledge should not be subject to any annoying effects said knowledge may have on people hedged against such knowledge becoming available. That’s an anti-liberal recipe for a rather dark society. I don't see the point in tone policing people generating knowledge just to remind them that sometimes knowledge is inconvenient.

"Tone policing"? What do you think that phrase means? Who was talking about "tone"?

Do you believe in "responsible disclosure" [1] of security vulnerabilities? How does your stated philosophy apply or not apply to ethics around disclosure of discovered software security vulnerabilities? Is that different?

[1] https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...

Technically, the poster you replied to could be said to have been tone policing with the assertion of releasing a deliberately "provocative" abstract.

I mean, I get it, it's not the most straightforward mental leap, but I can understand the sentiment.

And as far as responsible disclosure goes, no, the responsible thing to do is to notify everyone at once. Keep in mind if it is right, this means that nation state actors have just been equipped with a roadmap to potentially cracking a lot of banked ciphertext, probably faster than anyone else.

You don't sit on that kind of thing, even if it does mean some corporate actors get burned.

If the only thing saving your rear was an RSA key... Take notice: the clock may have just been significantly advanced. Be you nation-state, corp, or someone who'd just prefer to remain in the shadows.

Suffer thee not information asymmetry to live lest you carry the blood of those you sacrifice on the altar of your limited disclosure. It also hedges against you getting disappeared and suppressing whatever other people you shared it with that remain to keep something so relied upon from being realistically entertained.

I mean, cmon, how long has everyone been joking they'd hate to be the person who discovered how to break RSA, because we all know it would lead t

<SIGNAL_LOST>

Sorry that is such a bullshit statement.

Can you explain?

Pursuit of knowledge can be noble, but it can also be horrific. Granted, this isn't horrific, but if RSA is literally "destroyed" then it has potential to do harm, even if we stop using it immediately (which is expensive by itself). Ethics governs good science, and math is no exception

Unlike a zero day, there still remain a number of important factors (haha) to actually break a large number. But crypto systems are special because they rely on trust. The mere sign of weakness is sufficient to kill that trust. A sufficiently resourced state actor may even be ahead.. we don’t know

You're going to be killed or kidnapped between steps 1 and 3.

Yep. A paper has to substantiate any claims made in the abstract, and this failure already undermines my confidence that the result is correct.

This is math. The working proof _is_ the paper. It just takes a long time to refute if there’s a subtle error. “Putting up” ISN’T proof but it will sure get a lot of important people to drop what they’re doing and check the paper faster.

I haven't looked into the details of the speed improvements... it's often the case that big-O improvements in complex problems come with bigger constant factors. It's entirely reasonable that this new algorithm is slower for factoring 384-bit numbers but faster for 4096-bit numbers.

I wouldn't be surprised if a demonstration that pushes the current publicly known record for largest factored RSA modulus costs hundreds of thousands or millions of dollars even with this new algorithm, and the algorithm is also slower than other methods for, say 384-bit RSA.

It may be difficult even for Claus Schnorr to get that kind of budget for a demonstration before his paper gets traction.

This is math claiming an algorithm that is faster.

For an algorithm, asking for results is not that weird. It certainly fits within the purview of a paper. Moreover, it would be really strong evidence for the claims made in the paper. It is much easier to check than the proofs.

I'd agree with that if he had merely tweeted about RSA. However, he put the claim to "destroy RSA" in the abstract. Claims made in the abstract do need to be substantiated, but RSA isn't discussed in the main text at all.

"Putting up" isn't proof that an RSA decryption algorithm exists, but it is proof that someone has found a way to reverse RSA either way. The latter is much more important.

I dunno, isn't a demonstration of a few factors proof that someone somewhere has broken RSA, or has a machine much more powerful than we'd expect to exist? It's not proof that any of the math in the paper is correct, but it doesn't have to be. Even if there's a subtle error in the proof-as-written, it doesn't matter if the implementation actually works.

All that is true.

But if the author of the paper (or even someone else with a credible reputation) were to publish factors and say they used this method, then it's useful evidence.

How broken does this claim RSA to be? SHA-1 was known to be broken for a long time before an actual collision was demonstrated, for example. [1]

[1]https://en.wikipedia.org/wiki/SHA-1#Attacks

An attack is anything that makes it take less than a brute force effort (of 2^80 operations). A 2^63 effort was really expensive in 2007. But by 2015 computation was maybe 2^5 times cheaper and there was a 2^5 cheaper attack.

I believe that this breakthrough could be quite a bit bigger because it's changing the costs from exponential to polynomial and so speedup is likely a much bigger change.

You probably understand how serious this is, so many people are going to become very emotional as this strikes at the very foundations of the Internet and digital security as we know it. The reactions I've seen so far do seem very emotional and this will only become much, much worse if there's a PoC which is produced.

What does emotion have to do with it? For many years, every time someone claims to have broken RSA the response is the same: here is a test case you (the alleged RSA breaker) should be able to crack if your claims are correct. Put up or shut up. That seems like a fair, just-the-facts response.

Internet and digital security? This strikes at the very foundations of number theory.

Ehhhhhhh. I'm not sure I'm aware of anything in number theory that would substantially change if a fast factoring algorithm were discovered, outside of its applications to cryptography. It'd be very likely that a fast factoring algorithm exposed some unusual structure about the natural numbers, I guess?

And the article says it would take around a thousand CPU core years to break 800-bit RSA. Maybe Schnorr needs to publish first to get the budget for that.

That can be solved within hours on a GPU cluster... Looking at a single core is usually not a great idea when you have billions of cores available.

Only if your algorithm is conducive to parallelism based speedup. Amdahl's law still applies.

The paper mentions some gains being possible through parallelism with one of the algorithms their work is based on, but also mentioned most prior art is not effectively parallelizable across discrete machines.

> the article says it would take around a thousand CPU core years to break 800-bit RSA

That's for previous algorithms, not the one described in the paper.

He doesn't need to demonstrate it with an 800-bit number. Given the claims made about how much faster this is, the effect should show up even in fairly small numbers.

For something of this magnitude, I think the expected behavior would be to delay the release of the paper until the ecosystem had time to adapt. To prove the paper is valid (and to assert precedence) you would offer to factor several "nothing up my sleeve" numbers -- like sha512("schnorr1") + sha512("schnorr2").
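The suggestion above can be sketched as follows (interpreting the "+" as concatenation of the two digests into one 1024-bit integer; the seed strings are the commenter's examples):

```python
import hashlib

# A "nothing up my sleeve" challenge number: derive it deterministically
# from public seed strings, so nobody can suspect the factorer of
# planting a number whose factors they already knew.
def nums_challenge(seed1: str, seed2: str) -> int:
    digest = (hashlib.sha512(seed1.encode()).digest()
              + hashlib.sha512(seed2.encode()).digest())
    return int.from_bytes(digest, "big")

n = nums_challenge("schnorr1", "schnorr2")
print(n.bit_length())  # at most 1024 bits
```

Factoring such a number (rather than a hand-picked one) would assert both precedence and good faith.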

As it is, if the algorithm presented is valid then this potentially compromises currently operating systems.

I haven't read the paper or anything, but if the expected adaptation is to drop support for RSA, no reasonable amount of time will make it a seamless transition.

There are so many devices and pieces of software stuck on RSA that a heads-up of, say, 5 years would still result in a colossal mess; may as well have the mess now.

Note that (despite the ill-considered claim about RSA in the abstract) this isn't security research. It's a maths paper in a very important area of number theory. It doesn't disclose a patchable flaw in RSA; it claims that RSA is based on an unproven conjecture that turns out to be false. That RSA is vulnerable to fast prime factoring has been known since it was first implemented. That's not fixable, so delay serves little purpose, especially for what would be, if true, a seminal result in number theory.

I do not agree that so-called “responsible disclosure” would or should be the expected behavior. I do understand how someone accustomed to corporate bug bounties and private security mailing lists may think so, though. Full disclosure is a perfectly reasonable strategy especially when we’re operating in the academic realm. Industry always takes years to catch up to academia anyway.

If this paper allows producing a working RSA cracker in a month, much of high-value IT infrastructure is under imminent threat.

Yes, you can replace your SSH keys with elliptic ones, and maybe adjust your TLS accepted algorithms. Even this is not always easy or cheap.

But other things that may rely on RSA (or triple RSA) may have trouble upgrading fast, and upgrading them at all is going to cost a lot.

Yeah, but also the industry knows for DECADES that RSA will likely be broken eventually. Not having reacted yet is not the researcher's fault.

We know that Quantum computers can break it too, yet nobody really acts on it with any urgency. If suddenly there is a breakthrough and we can reach this state within a year, then there will be no time to adapt as well.

It all boils down to the general corporate attitude of not fixing catastrophic problems without precedent. We see the same with climate change, and once it hits hard, it will be too late to adapt.

Not only quantum computers. Every cryptographic method relies on the fact that there is no easy way to decide NP problems. And we don't really know whether P != NP (in fact, according to polls, about a quarter of researchers disagree), or, in case there actually is a polynomial algorithm, whether its exponent will make it intractable in practice.

So there might be a DOOMSDAY in the future, where all cryptography ceases to work, because somebody just figures out a way to decide NP problems quickly enough.

>Yeah, but also the industry knows for DECADES that RSA will likely be broken eventually.

Interesting. Reference?

It's generally accepted for almost all crypto protocols that it's going to be broken eventually (a few rare cases are provably secure, one-time pads and not much else).

If nothing else, quantum computers should break RSA in particular (the algorithm is already known and just waiting for hardware) and the writing has been on the wall there for a long time.

> It's generally accepted for almost all crypto protocols that it's going to be broken eventually

Just like it was generally accepted that god exists. Those claims are of similar strength.

> If nothing else, quantum computers should break RSA in particular

Quantum computers with enough qubits do not exist and it's absolutely not obvious whether they will exist at all.

I asked a few years ago (I don't remember the specifics) what the recommended post-quantum crypto was, and basically the answer was that there wasn't any vetted and proven post-quantum crypto. So I don't think you can blame the industry here.

There are some promising experiments. All of them are *much worse* than the algorithms we use today (including RSA) in practical terms, except that Shor's algorithm doesn't apply to them, and so they specifically fulfil the criterion of post-quantum public key crypto. [For symmetric encryption quantum computers would only matter if they were pretty fast/cheap *and* we didn't have ready-to-go 256-bit symmetric crypto, but we do.]

OpenSSH actually has an implementation of a reasonable contender for SSH. Google have experimented (in Chrome builds) with some of these contenders for TLS too. What you would likely want to do - since by definition these are relatively untried algorithms and so might be unsafe against adversaries with an existing not-at-all-quantum computer - is combine with an existing algorithm, most likely elliptic curve based, but RSA would be possible, under a "swiss cheese" model where you're only dead if an adversary penetrates all the layers.

But like I said, *much worse*. Given that there aren't any suitably large quantum computers (and it's always possible that we'll eventually figure out we just can't build usefully large quantum computers, just like we eventually found out that while you *can* travel faster than sound you *can't* travel faster than light), it would make no sense to deploy this today, even though it continues to make sense to do research Just In Case.

Could you expand on "much worse"? Are they slower, are they harder to implement?

- slower
- larger messages
- larger public keys
- larger private keys

Just to expand the other answer you got.

Slower, in a different complexity class from the classical ones (e.g. O(n^2) instead of O(n), but I don't know what exact complexities the top candidates have right now, as a function of key length).

Larger messages, in that a signature requires 100's or 10's of MB instead of a few kB.

Larger public keys, as again 100's of MB instead of a few kB.

Larger private keys, as in a few to 10's of MB instead of 1/2 to a few kB.

But I guess the most relevant "much worse" is that people have much less confidence that those algorithms are secure, and they are complex enough to hide huge traps that make implementing RSA look like a walk in the park.

Wouldn't surprise me if there are classified quantum computers crunching numbers as we speak tbh.

I would be very surprised. That would involve keeping some huge leaps in basic research under wraps.

> triple RSA

Yeah, no. 3DES (Triple DES) is/was a thing, but Triple RSA is not.

That was *the* joke.

> If this paper allows producing a working RSA cracker in a month

The blog post claims that people have been trying to reproduce his results for two years, though.

If you find something that can break all cryptography in the world, then I think your best option (even for your personal security) is to release everything publicly.

A one time pad can not be broken.

I always enjoy the reaction when I tell students that we've had provably unbreakable encryption since the 19th century.

And yet any time someone proposes to actually implement it, the response is always negative: "don't roll your own encryption" or "sharing and storing the pads is infeasible" or some such.

Seems like we should have figured out a way by now to use one-time pad encryption by default for critical paths, even if that requires new industries to distribute pads and guarantee their security.
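For concreteness, here is a minimal one-time pad sketch (illustrative only; the security proof assumes the pad is uniformly random, used once, and shared over a secure channel, which is exactly the infeasible part the objections are about):

```python
import secrets

# One-time pad: XOR the message with a truly random pad of equal length.
# Information-theoretically secure (Shannon) if the pad is uniformly
# random, never reused, and kept secret.
def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # OS-provided randomness

ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # decryption is the same XOR
```

The three lines of crypto are trivial; distributing and protecting a pad as long as all your traffic is the part no industry has solved at scale.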

Yeah. I take up the key exchange problem (and Diffie-Hellman) as well as difficulty of achieving true randomness.

Quantum computing may eventually force us to move to OTPs. Of course it's going to be a pretty long time before we'll have quantum computers that can work across an 800-bit field.

It won't. Symmetric ciphers are holding up just fine against quantum computers.

Key exchanges would be annoying, but they are even more so for one-time pads.

But it will never come to even that: there are plenty of quantum-safe asymmetric cryptosystems around.

From a personal safety point of view?

RSA has published the same set of "come and break me" keys for years, decades I think. Breaking those public (in both senses!) keys would be a good start.

The first thing this will be used for is stealing Bitcoin and other cryptocurrency I predict. So watch out for your wallets.

Bitcoin wallets don't use RSA, but ECDSA.

Once people figure out the math to break bitcoin then they can transfer bitcoin from any address.

I don't know why people don't bring this up more often. It will likely happen long before quantum computers make it possible.

Because, simply, it's not true. I'm curious though what attack vector you're thinking of, especially one that's not related to quantum computers. Are you worried that the ECDSA public-key cryptosystem employed by Bitcoin will be broken, such that the private keys could somehow get derived easily from the public ones? If so, that still wouldn't let an attacker "transfer bitcoin from any address," since the addresses themselves are *hashes* of the public keys. So people would have to stop re-using addresses to receive bitcoin multiple times, since once an address has been the sender in a transaction, and its public key revealed, it would become vulnerable.

This was my exact argument: https://news.ycombinator.com/item?id=26323951
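The hash-commitment point can be sketched as follows (heavily simplified: real Bitcoin derives addresses via RIPEMD-160 over SHA-256 plus a version byte and Base58Check, and the public key below is a made-up placeholder, not a real key):

```python
import hashlib

# Hypothetical 33-byte compressed public key (placeholder bytes).
pubkey = bytes([0x02] + [0x11] * 32)

# Simplified "address": a hash commitment to the public key. Until the
# owner spends (revealing the public key in the spending transaction),
# an attacker who could invert ECDSA has nothing to invert; only this
# one-way hash is public.
address = hashlib.sha256(pubkey).hexdigest()
print(address)
```

This is why an ECDSA break would endanger re-used and already-spent-from addresses first, not every address at once.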

Should be trivial to show a working proof on a smaller-than-usual RSA number if "this *really* destroys RSA".

I know a lot of programming languages, but I have never wrapped my head around math notation.

Question for someone who is familiar with math notation: was the abstract of this article easy to understand?

For me, the abstract seems like code but with no commentary explaining what each block does. But I could be mistaken.

> For me, the abstract seems like code but with no commentary explaining what each block does. But I could be mistaken.

You are mistaken. (Pretty much) all of mathematics is written as natural language, and those symbols are just abbreviations for stuff that could also be written as words. If I read those sentences out loud, another could write them back down and arrive at something that looks the same.

That's why all of mathematical notation is embedded in sentences - they are *part* of the sentence and can be read as such.

Further, that is really basic notation a first-semester student of any STEM discipline should be able to read, though I wouldn't expect them to know what a lattice and some of the other terminology is.

I'd love a "cheat sheet" or dictionary for mathematical notation. I don't know how to pronounce half of the embedded symbology, let alone what rules apply. It seems so esoteric and arbitrary sometimes, though I recognize it's most certainly not.

> It seems so esoteric and arbitrary sometimes

It kind of is since a lot of it grew organically. Often it isn't even consistent across authors/countries. It wasn't really "designed".

The German Wikipedia has a decent overview of symbols and I often use it as a cheat-sheet for LaTeX, whereas the English equivalent of that article has a better explanation of basic notation.

https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbo...

https://de.wikipedia.org/wiki/Liste_mathematischer_Symbole?o...

Even if you don't speak German, the article might be useful because you can follow the links to the individual articles next to the symbols, then change the language back to English. Or use it as a LaTeX reference if you're like me and have trouble remembering some of the more wonky abbreviations it uses. Of course that article isn't comprehensive, but most of the stuff that is missing is very domain-specific.

Thanks for the info. I finished calculus years ago but am very rusty now. I am reviewing the notation links provided, the material is dense, but it is understandable.

This is perfect, can't believe I didn't go to Wikipedia right away (duh!) -- thank you. Into the rabbit hole I go.

You will not be able to understand the notation if you do not understand the math.

For context: I'm a computer science MSc student.

The notation is easy to understand (and as far as mathematical notation goes, really quite tame). I don't know what a nearly shortest vector of a lattice is in this context, but I do understand everything else. Note that means I have no idea how the actual method works, but I can understand what's being claimed.

Not an expert at all, but you can think of lattices as evenly-spaced grid points in a vector space. Given a set of basis vectors b0..bn and arbitrary integers a0..an, the points a0*b0 + ... + an*bn form the lattice generated by b0..bn.

You can have a "good basis" where the norms of the basis vectors are low, or an equivalent "bad basis" with the same lattice points but high norms. Turning a bad basis into a good one is one hard problem (lattice reduction), but there are polynomial-time approximations.

The shortest vector problem, iirc, is to find the shortest nonzero vector in the lattice - roughly, the smallest vector you would read off the best possible basis.
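A toy illustration of the good-basis/bad-basis idea (my own 2-D example, nothing to do with Schnorr's construction):

```python
# The bad basis (5, 3), (3, 2) has determinant 5*2 - 3*3 = 1, so it
# generates exactly the integer lattice Z^2, whose good basis is
# (1, 0), (0, 1).

def lattice_points(b1, b2, r):
    """All integer combinations a1*b1 + a2*b2 with |a1|, |a2| <= r."""
    return {(a1 * b1[0] + a2 * b2[0], a1 * b1[1] + a2 * b2[1])
            for a1 in range(-r, r + 1) for a2 in range(-r, r + 1)}

good = lattice_points((1, 0), (0, 1), r=50)
bad = lattice_points((5, 3), (3, 2), r=50)

# Near the origin, the two descriptions yield exactly the same points:
near = {p for p in good if abs(p[0]) <= 5 and abs(p[1]) <= 5}
assert near <= bad

# The shortest nonzero vector (norm 1) is obvious from the good basis
# but hidden in the bad one; finding it in general is the SVP.
shortest = min((p for p in bad if p != (0, 0)),
               key=lambda p: p[0] ** 2 + p[1] ** 2)
print(shortest)  # one of (1, 0), (0, 1), (-1, 0), (0, -1)
```

In 2 dimensions this is easy; the hardness of SVP (and the quality gap between good and bad bases) only bites as the dimension grows into the hundreds.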

The first half of the abstract is more akin to declaring the data types and structures used, and the second half is mostly a very high level summary of the overall method and results. It's not supposed to be interpreted like code. It's just setting up the context you need to start interpreting the meat of the paper, and giving you a heads-up about what background topics to Google if anything in the abstract sounds unfamiliar.

For someone who took a matrix calculation (linear algebra) course like me, it was kinda understandable.

Here we have Léo Ducas testing Schnorr's new method in Sage: https://github.com/lducas/SchnorrGate

Apparently, "[t]his suggest that the approach may be sensible, but that not all short vectors give rise to factoring relations, and that obtaining a sufficient success rate requires much larger lattice dimension than claimed in [Sch21]."