Birch and Bombieri, which in turn used Deligne's work on the Riemann hypothesis on varieties); one group focused on the combinatorics of putting these estimates together to create a distributional estimate on primes; one group focused on optimising the GPY sieve to convert distributional estimates on primes into a concrete value of k; and one group focused on finding narrow prime tuples, which converted values of k to values of H. We had managed to organise ourselves into a sort of factory production line: an advance in, say, the Type I estimates would be handed over to the combinatorics group to produce a new distributional estimate on primes, which the sieve team would then promptly convert into a revised value of k, which the prime tuples team would then use to update their value of H_1. Steady improvements from all of these groups led to new improvements to these values on a daily basis for several weeks (see michaelnielsen.org/polymath1/index.php?title=Timeline_of_prime_gap_bounds for a timeline of these gains).

Many important contributions came from mathematicians who were not initially involved in the Polymath project. For instance, in the first week of June, János Pintz released a preprint [33] in which he went through Zhang's paper and optimised all the numerical constants, leading to further improvements in the values of k and H_1; within a day or two, the Polymath team had gone through the preprint (correcting some arithmetic errors along the way), lowering k to 26 024 and H_1 to about 280 000. Later, we ran into a problem in which we became victims of our own success: the values of k had dropped so low (around 6000) that a certain error term in the sieve analysis that decayed exponentially in k became non-negligible. Pintz, who had followed developments closely, came up with a much more efficient way to truncate the sieve that rendered this error term almost non-existent (dropping k almost immediately from 6000 to about 5000) and then shared these computations with the Polymath project. Similarly, Étienne Fouvry, Emmanuel Kowalski, Philippe Michel and Paul Nelson, who had previously worked on using Deligne-type exponential sum estimates to obtain distributional estimates similar to Zhang's Type III estimates, found a simpler approach than Zhang's to these estimates that gave significantly better numerology; they also donated these arguments to the Polymath project and began actively participating in the ensuing discussion. Thomas Engelsma, who had made extensive numerical computations of narrow admissible tuples over a decade ago, also opened up his database to our project.

By the end of June (with k dipping below 1000 and H falling below 7000), our understanding of the various components of the project had matured significantly, though the pace of discussion was still lively (with perhaps 20 comments each day, down from a peak of 50 or so). A database had been set up at math.mit.edu/~primegaps/ to automatically record the narrowest known prime tuples for a given value of k (up to k = 5000), which automated the task of converting k-values to H-values, with several of the more computationally minded participants competing to add ever narrower tuples to the database. A prime distribution estimate could similarly be converted into a value of k with a few lines of Maple code, using Pintz's version of the truncated sieve, and a further few lines of code allowed us to automatically convert any improvement in the Type I, II, III estimates to a prime distribution estimate in an optimised fashion.
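To illustrate the k-to-H conversion at the heart of the tuples database: it amounts to finding admissible k-tuples of small diameter. What follows is a minimal Python sketch of the underlying notions (using the sympy library; this is purely illustrative and not the project's actual code, which relied on Maple scripts and specialised tuple searches such as Engelsma's):

    # A k-tuple (h_1, ..., h_k) is admissible if, for every prime p, the
    # h_i do not occupy all p residue classes mod p; only primes p <= k
    # need checking, since k elements never cover more than k classes.
    from sympy import primerange, nextprime

    def is_admissible(h_tuple):
        k = len(h_tuple)
        for p in primerange(2, k + 1):
            if len({h % p for h in h_tuple}) == p:  # every class mod p hit
                return False
        return True

    def crude_admissible_tuple(k):
        # One classical construction: the first k primes exceeding k.
        # Each element is coprime to every prime p <= k, so residue
        # class 0 mod p is always avoided.  Admissible, but far from
        # optimal in diameter.
        tup, h = [], k
        for _ in range(k):
            h = nextprime(h)
            tup.append(h)
        return tup

    t = crude_admissible_tuple(632)
    assert is_admissible(t)
    print("k =", len(t), " diameter H =", t[-1] - t[0])

Even this crude construction shows that H is finite for any given k; the database recorded far narrower tuples, reaching the diameter H_1 = 4680 at k = 632 mentioned below.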
The Type II and Type III estimates had been optimised to such an extent that they were no longer the dominant barrier to further improvement of the prime distribution estimates. The Type I estimates were the only remaining major source of improvement but, over the course of the next few weeks, several new ideas on optimising these estimates (most notably, using the q-van der Corput method of Graham and Ringrose [21], combined with the theory of trace weights arising from the work of Deligne, and carefully splitting the summations to maximise the effectiveness of the Cauchy-Schwarz inequality) were implemented, leading to the final values of k = 632 and H_1 = 4680.

Further numerical improvements looked very hard to come by and so, for the next few months, the Polymath project participants devoted their efforts to writing up the results, splitting the paper into several sections in a shared Dropbox folder, with various participants working on separate sections in parallel and coordinating their efforts through blog comments. Somewhat to our surprise, once all the various arguments, spread out over a dozen blog posts, had been collated together, the final paper turned out to be extremely large – 163 pages, to be exact⁴ (for comparison, Zhang's original paper was 53 pages in length). Somehow, the Polymath format had managed to efficiently segment the research project into more manageable chunks, so that no individual participant had to absorb the entire 163-page argument at any given time while the research was ongoing.

In late October, when the Polymath paper was going through several rounds of proofreading in preparation for submission, James Maynard announced a major breakthrough [28], in which the prime gap H_1 was lowered substantially to 700 (quickly revised downward to 600), with the improvement H_1 ≤ 12 available under the assumption of the Elliott-Halberstam conjecture. Furthermore, Maynard had found a way to modify the GPY sieve argument in such a way that the new distribution estimates on primes (the most difficult and novel part of Zhang's work and also of the Polymath project) were no longer needed and, furthermore, the methods could for the first time be used to bound H_m for larger m. (I had also begun to arrive at similar conclusions independently of Maynard, though I did not get the precise numerical results that Maynard did.) After some discussion with Maynard and with the Polymath team, it was decided that we should both publish our results separately and then join forces to see if the new ideas in Maynard's paper could be combined with those in the Polymath paper to obtain further improvements (this effort was soon dubbed "Polymath8b").

I had some mixed feelings about the continuation of the Polymath8 project; running the Polymath8a project had already consumed several months of my time and I was thinking of turning to unrelated research projects. However, the enthusiasm of the other participants and the lure of getting a quick payoff from comparatively brief snatches of mathematical thought were once again too great to resist.

⁴ After significant revisions in the light of the referee report and subsequent developments, the paper has since been shortened somewhat to 107 pages.
