Math News and Discussion



#21 Unity (Information Organism, Members, 2,437 posts)
/* This is a review of a paper co-authored by Peter Scholze, who is being considered for the 2018 Fields Medal (the "Nobel of Maths") for his work in number theory and geometry */
 
An exceptional review of a paper by Bhatt and Scholze on étale topology
 
Posted on June 26, 2016 by Edward Dunne
 
Another great review.  Here Pieter Belmans reviews a paper by Bhatt and Scholze on étale topology.  Before describing the authors’ work, Belmans tells us where étale topology comes from and why some new ideas might be necessary.  He then gives a quick description of what Bhatt and Scholze are doing and why it is a good thing.  Once the history and context are in place, Belmans goes through the contents of the paper, with plenty of comments to help the reader.  He concludes by giving a reference to the Stacks Project, where you can find out lots more about pro-étale cohomology.
 
Belmans is a mathematician and a coder. He is very involved in the Stacks Project.  On a smaller scale, he wrote a handy Python script that helps MathSciNet users obtain BibTeX versions of the references in a LaTeX file, replacing a clunky shell script that did the same thing.
 
MR3379634
 
Bhatt, Bhargav(1-MI); Scholze, Peter(D-BONN)
 
The pro-étale topology for schemes. (English, French summary)
 
Astérisque No. 369 (2015), 99–201. ISBN: 978-2-85629-805-3
14F05 (14F20 14F35 14H30 18B25)
 
One of the reasons for the introduction of the étale topology was the definition of the ℓ-adic Weil cohomology theory. In trying to mimic the approach from algebraic topology by using constant sheaves on varieties over some (algebraically closed) field k, the need for a topology finer than the Zariski topology arises, and the étale topology is a reasonable candidate for this. Unfortunately one easily shows that things only work as intended for torsion sheaves, while the goal is to obtain coefficients in a field (of characteristic zero), such as Q̄_ℓ, where ℓ is prime to the characteristic of k. So taking sheaf cohomology of the constant sheaf associated to Q̄_ℓ does not yield satisfactory results. Nevertheless, it is possible to rectify the situation, by defining ℓ-adic cohomology as the inverse limit of the cohomology of Z/ℓⁿZ in the étale topology, and tensoring it with Q̄_ℓ. It can be shown that this indeed satisfies the axioms for a Weil cohomology theory.
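In LaTeX notation, the standard limit-then-tensor construction being described here (a textbook definition, not something specific to the paper under review) is

    H^i(X, \mathbb{Z}_\ell) := \varprojlim_n H^i_{\mathrm{\acute{e}t}}(X, \mathbb{Z}/\ell^n\mathbb{Z}), \qquad H^i(X, \overline{\mathbb{Q}}_\ell) := H^i(X, \mathbb{Z}_\ell) \otimes_{\mathbb{Z}_\ell} \overline{\mathbb{Q}}_\ell,

i.e. one first takes the inverse limit over the torsion coefficients Z/ℓⁿZ and only afterwards tensors up to Q̄_ℓ.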
 
The price one pays for this approach is that one is not working directly with étale sheaves of Z_ℓ- or Q̄_ℓ-modules, but rather with pro-sheaves of Z/ℓⁿZ-modules. Therefore the usual yoga of setting up sheaf cohomology does not work: one does not get abelian categories and injective resolutions for free. Also checking that the functors constructed in this ad hoc fashion indeed satisfy the axioms for a Weil cohomology theory is hard, because they are not derived functors as such. This becomes only more problematic when one tries to study ℓ-adic sheaves in a relative setting, where one would like to have an appropriate triangulated category that behaves like the derived category of sheaves. These were obtained by Deligne and later T. Ekedahl [in The Grothendieck Festschrift, Vol. II, 197–218, Progr. Math., 87, Birkhäuser Boston, Boston, MA, 1990; MR1106899] along the lines of Grothendieck’s ad hoc definition, whilst U. Jannsen’s continuous étale cohomology [Math. Ann. 280 (1988), no. 2, 207–245; MR0929536] gives a theory of ℓ-adic cohomology for non-algebraically closed base fields, where Mittag-Leffler-type conditions are not readily available.
 
The paper under review shows how one can overcome these difficulties by changing the underlying site. Indeed, the main result is that ℓ-adic cohomology truly is sheaf cohomology in the pro-étale topos, and that the derived category one obtains naturally in this way is indeed equivalent to the triangulated category constructed in an ad hoc fashion. The reason for this is that this new site has better behaviour with respect to inverse limits, thereby eliminating many of the technicalities one encounters in the classical approach.
 
In section 2 the theory of weakly étale and pro-étale morphisms of rings is introduced in a detailed fashion. A weakly étale morphism of rings (also called absolutely flat) is a flat morphism whose diagonal is also flat. A pro-étale morphism is an inductive limit of étale morphisms of rings. One easily shows that étale implies pro-étale implies weakly étale implies formally étale. So these two new notions are weakenings of the finiteness conditions for étale morphisms, and various properties are proven.
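Restated in LaTeX notation, and only repeating the two definitions just given:

    A \to B \ \text{weakly étale} \iff A \to B \ \text{is flat and} \ B \otimes_A B \to B \ \text{is flat},
    A \to B \ \text{pro-étale} \iff B \cong \varinjlim_i B_i \ \text{for a filtered system of étale } A\text{-algebras } B_i.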
 
In section 3 the notion of replete and locally weakly contractible topos is introduced. A replete topos is one where surjections are closed under sequential limits. It is precisely this property that makes inverse limits behave just like in the category of sets. One sees by virtue of an easy example that the usual topologies (such as Zariski, étale or fppf) certainly do not satisfy this property, whilst the fpqc topology (with the usual appropriate set-theoretical precautions) defines a replete topos.
 
The notion of locally weakly contractible topos is a strengthening of repleteness, and ensures for instance that the derived category of abelian objects is compactly generated. In the next section it will be shown that the pro-étale topology defines such a locally weakly contractible topos. Another important property of replete toposes is that their derived categories are left-complete, which ensures unbounded cohomological descent, without any conditions on the cohomological dimension, which are usually needed for the étale site. The remainder of the section is dedicated to studying the behaviour of (derived) completions of rings, first in absolute generality and later specialised to the case of Noetherian constant rings.
 
In section 4 one is given the definition of the pro-étale topology for schemes: one takes the category of weakly étale schemes over the base, and takes covers from the fpqc topology (suitably taking care of set-theoretical issues, as for the fpqc topology). The reason for using weakly étale morphisms is that being pro-étale is not Zariski local on the target, but the authors show that every weakly étale morphism f: X → Y is Zariski locally on X and locally for the pro-étale topology on Y a pro-étale morphism of rings. In other words: the pro-étale site is an analogue of the small étale site, but the objects are weakly étale whilst the coverings are fpqc coverings. The main result is that the pro-étale site is subcanonical, generated by affines, and that the topos is locally weakly contractible as defined in the previous section.
 
Section 5 is dedicated to the comparison of the étale and the pro-étale topos. Because every étale map is also weakly étale, we get a morphism of toposes ν: Shv(X_pro-ét) → Shv(X_ét). It is shown that ν∗ is fully faithful for sheaves of sets and sheaves of abelian groups, and for bounded below complexes. For unbounded complexes the issue regarding left-completeness appears, and extra care is needed, but one can show that (an appropriate subcategory of) the unbounded derived category from the pro-étale site realises the left completion of the unbounded derived category from the étale site. Finally a comparison with Ekedahl’s and Jannsen’s theory is given.
 
The main motivation for the work can be found in section 6, where the notions of constructible sheaves on the two toposes are compared. It is shown how the pro-étale approach has completely analogous recollement properties for closed subschemes (resp. descriptions of derived categories supported on locally closed constructible subsets). Then constructibility in the étale topology is recalled, and it is shown how constructible complexes form precisely the compact objects of the derived category of A-modules, where A is a ring such that affines have bounded cohomological dimension with respect to this ring. Over an algebraically closed field this condition is satisfied for torsion coefficients, which is precisely what makes the usual theory work as intended. The authors then compare all this to constructibility in the pro-étale topology, in particular on Noetherian schemes. Finally it is shown how one obtains a six-functor formalism and that the machinery for Q̄_ℓ-sheaves indeed gives equivalent triangulated categories.
 
Finally, section 7 develops the theory of fundamental groups in the pro-étale topology. In the étale context one has a profinite fundamental group defined in SGA1 and a prodiscrete fundamental group defined in SGA3. The main result is that these are the profinite (resp. prodiscrete) completions of the pro-étale fundamental group; hence this theory recovers the earlier constructions.
 
The text is very well written, and contains many instructive examples and references. It gives ample motivation for the approach that is taken, and the resulting machinery is indeed beautiful, with the strong relation to (and various improvements of) the classical theory being the main theme of the work. At the moment it is the main reference text on the definition of the pro-étale topology, besides the development of some of the tools and results in the Stacks Project, tag 0965.
 
Reviewed by Pieter Belmans


#22 Unity (Information Organism, Members, 2,437 posts)
Michael Atiyah’s Imaginative State of Mind
 
At 86, Britain’s preeminent mathematical matchmaker is still tackling the big questions and dreaming of a union between the quantum and the gravitational forces.
 
By Siobhan Roberts
 
March 3, 2016
 
Despite Michael Atiyah’s many accolades — he is a winner of both the Fields and the Abel prizes for mathematics; a past president of the Royal Society of London, the oldest scientific society in the world (and a past president of the Royal Society of Edinburgh); a former master of Trinity College, Cambridge; a knight and a member of the royal Order of Merit; and essentially Britain’s mathematical pope — he is nonetheless perhaps most aptly described as a matchmaker. He has an intuition for arranging just the right intellectual liaisons, oftentimes involving himself and his own ideas, and over the course of his half-century-plus career he has bridged the gap between apparently disparate ideas within the field of mathematics, and between mathematics and physics.
 
One day in the spring of 2013, for instance, as he sat in the Queen’s Gallery at Buckingham Palace awaiting the annual Order of Merit luncheon with Elizabeth II, Sir Michael made a match for his lifelong friend and colleague, Sir Roger Penrose, the great mathematical physicist.
 
Penrose had been trying to develop his “twistor” theory, a path toward quantum gravity that’s been in the works for nearly 50 years. “I had a way of doing it which meant going out to infinity,” Penrose said, “and trying to solve a problem out there, and then coming back again.” He thought there must be a simpler way. And right then and there Atiyah put his finger on it, suggesting Penrose make use of a type of “noncommutative algebra.”
 
“I thought, ‘Oh, my God,’” Penrose said. “Because I knew there was this noncommutative algebra which had been sitting there all this time in twistor theory. But I hadn’t thought of using it in this particular way. Some people might have just said, ‘That won’t work.’ But Michael could immediately see that there was a way in which you could make it work, and exactly the right thing to do.” Given the venue where Atiyah made the suggestion, Penrose dubbed his improved idea “palatial twistor theory.”

 

This is the power of Atiyah. Roughly speaking, he has spent the first half of his career connecting mathematics to mathematics, and the second half connecting mathematics to physics.
 
Atiyah is best known for the “index theorem,” devised in 1963 with Isadore Singer of the Massachusetts Institute of Technology (and properly called the Atiyah-Singer index theorem), connecting analysis and topology — a fundamental connection that proved to be important in both mathematical fields, and later in physics as well. Largely for this work, Atiyah won the Fields Medal in 1966 and the Abel Prize in 2004 (with Singer).
 
In the 1980s, methods gleaned from the index theorem unexpectedly played a role in the development of string theory — an attempt to reconcile the large-scale realm of general relativity and gravity with the small-scale realm of quantum mechanics — particularly with the work of Edward Witten, a string theorist at the Institute for Advanced Study in Princeton, N.J. Witten and Atiyah began an extended collaboration, and in 1990 Witten won the Fields Medal, the only physicist ever to win the prize, with Atiyah as his champion.
 
Now, at age 86, Atiyah is hardly lowering the bar. He’s still tackling the big questions, still trying to orchestrate a union between the quantum and the gravitational forces. On this front, the ideas are arriving fast and furious, but as Atiyah himself describes, they are as yet intuitive, imaginative, vague and clumsy commodities.
 
Still, he is relishing this state of free-flowing creativity, energized by his packed schedule. In hot pursuit of these current lines of investigation and contemplation, last December he delivered a double-header of lectures, back-to-back on the same day, at the University of Edinburgh, where he has been an honorary professor since 1997. He is keen to share his new ideas and, he hopes, attract supporters. To that end, in November he hosted a conference at the Royal Society of Edinburgh on “The Science of Beauty.” Quanta Magazine sat down with Atiyah at the Royal Society gathering and afterward, whenever he slowed down long enough to take questions. What follows is an edited version of those catch-as-catch-can conversations.
 
QUANTA MAGAZINE: Where do you trace the beginnings of your interest in beauty and science?
 
MICHAEL ATIYAH: I was born 86 years ago. That’s when my interest started. I was conceived in Florence. My parents were going to name me Michelangelo, but someone said, “That’s a big name for a small boy.” It would have been a disaster. I can’t draw. I have no talent at all.
 
You mentioned that something “clicked” during Roger Penrose’s lecture on “The Role of Art in Mathematics” and that you now have an idea for a collaborative paper. What is this clicking, the process or the state — can you describe it?
 
It’s the kind of thing that once you’ve seen it, the truth or veracity, it just stares you in the face. The truth is looking back at you. You don’t have to look for it. It’s shining on the page.
 
Is that generally how your ideas arrive?
 
This was a spectacular version. The crazy part of mathematics is when an idea appears in your head. Usually when you’re asleep, because that’s when you have the fewest inhibitions. The idea floats in from heaven knows where. It floats around in the sky; you look at it, and admire its colors. It’s just there. And then at some stage, when you try to freeze it, put it into a solid frame, or make it face reality, then it vanishes, it’s gone. But it’s been replaced by a structure, capturing certain aspects, but it’s a clumsy interpretation.
 
Have you always had mathematical dreams?
 
I think so. Dreams happen during the daytime, they happen at night. You can call them a vision or intuition. But basically they’re a state of mind — without words, pictures, formulas or statements. It’s “pre” all that. It’s pre-Plato. It’s a very primordial feeling. And again, if you try to grasp it, it always dies. So when you wake up in the morning, some vague residue lingers, the ghost of an idea. You try to remember what it was and you only get half of it right, and maybe that’s the best you can do.
 
 
Is imagination part of it?
 
Absolutely. Time travel in the imagination is cheap and easy — you don’t even need to buy a ticket. People go back and imagine they are part of the Big Bang, and then they ask the question of what came before.
 
What guides the imagination — beauty?
 
It’s not the kind of beauty that you can point to — it’s beauty in a much more abstract sense.
 
Not too long ago you published a study, with Semir Zeki, a neurobiologist at University College London, and other collaborators, on The Experience of Mathematical Beauty and Its Neural Correlates.
 
That’s the most-read article I’ve ever written! It’s been known for a long time that some part of the brain lights up when you listen to nice music, or read nice poetry, or look at nice pictures — and all of those reactions happen in the same place [the “emotional brain,” specifically the medial orbitofrontal cortex]. And the question was: Is the appreciation of mathematical beauty the same, or is it different? And the conclusion was, it is the same. The same bit of the brain that appreciates beauty in music, art and poetry is also involved in the appreciation of mathematical beauty. And that was a big discovery.
 
You reached this conclusion by showing mathematicians various equations while a functional MRI recorded their response. Which equation won out as most beautiful? 
 
Ah, the most famous of all, Euler’s equation:
 
e^(iπ) + 1 = 0
 
It involves π; the mathematical constant e [Euler’s number, 2.71828 …]; i, the imaginary unit; 1; and 0 — it combines all the most important things in mathematics in one formula, and that formula is really quite deep. So everybody agreed that that was the most beautiful equation. I used to say it was the mathematical equivalent of Hamlet’s phrase “To be, or not to be” — very short, very succinct, but at the same time very deep. Euler’s equation uses only five symbols, but it also encapsulates beautifully deep ideas, and brevity is an important part of beauty.
 
You are especially well-known for two supremely beautiful works, not only the index theorem but also K-theory, developed with the German topologist Friedrich Hirzebruch. Tell me about K-theory.
 
The index theorem and K-theory are actually two sides of the same coin. They started out different, but after a while they became so fused together that you can’t disentangle them. They are both related to physics, but in different ways.
 
K-theory is the study of flat space, and of flat space moving around. For example, let’s take a sphere, the Earth, and let’s take a big book and put it on the Earth and move it around. That’s a flat piece of geometry moving around on a curved piece of geometry. K-theory studies all aspects of that situation — the topology and the geometry. It has its roots in our navigation of the Earth.
 
The maps we used to explore the Earth can also be used to explore both the large-scale universe, going out into space with rockets, and the small-scale universe, studying atoms and molecules. What I’m doing now is trying to unify all that, and K-theory is the natural way to do it. We’ve been doing this kind of mapping for hundreds of years, and we’ll probably be doing it for thousands more.
 
Did it surprise you that K-theory and the index theorem turned out to be important in physics?
 
Oh, yes. I did all this geometry not having any notion that it would be linked to physics. It was a big surprise when people said, “Well, what you’re doing is linked to physics.” And so I learned physics quickly, talking to good physicists to find out what was happening.
 
How did your collaboration with Witten come about?
 
I met him in Boston in 1977, when I was getting interested in the connection between physics and mathematics. I attended a meeting, and there was this young chap with the older guys. We started talking, and after a few minutes I realized that the younger guy was much smarter than the old guys. He understood all the mathematics I was talking about, so I started paying attention to him. That was Witten. And I’ve kept in touch with him ever since.
 
What was he like to work with?
 
In 2001, he invited me to Caltech, where he was a visiting professor. I felt like a graduate student again. Every morning I would walk into the department, I’d go to see Witten, and we’d talk for an hour or so. He’d give me my homework. I’d go away and spend the next 23 hours trying to catch up. Meanwhile, he’d go off and do half a dozen other things. We had a very intense collaboration. It was an incredible experience because it was like working with a brilliant supervisor. I mean, he knew all the answers before I got them. If we ever argued, he was right and I was wrong. It was embarrassing!
 
You’ve said before that the unexpected interconnections that pop up occasionally between math and physics are what appeal to you most — you like finding yourself wading into unfamiliar territory.
 
Right; well, you see, a lot of mathematics is predictable. Somebody shows you how to solve one problem, and you do the same thing again. Every time you take a step forward you’re following in the steps of the person who came before. Every now and again, somebody comes along with a totally new idea and shakes everybody up. To start with, people don’t believe it, and then when they do believe it, it leads in a totally new direction. Mathematics comes in fits and starts. It has continuous development, and then it has discontinuous jumps, when suddenly somebody has a new idea. Those are the ideas that really matter. When you get them, they have major consequences. We’re about due another one. Einstein had a good idea 100 years ago, and we need another one to take us forward.
 
But the approach has to be more investigative than directive. If you try to direct science, you only get people going in the direction you told them to go. All of science comes from people noticing interesting side paths. You’ve got to have a very flexible approach to exploration and allow different people to try different things. Which is difficult, because unless you jump on the bandwagon, you don’t get a job.
 
Worrying about your future, you have to stay in line. That’s the worst thing about modern science. Fortunately, when you get to my age, you don’t need to bother about that. I can say what I like.
 
These days, you’re trying out some new ideas in hopes of breaking the stalemate in physics?
 
Well, you see, there’s atomic physics — electrons and protons and neutrons, all the stuff of which atoms are made. At these very, very, very small scales, the laws of physics are much the same, but there is also a force you ignore, which is the gravitational force. Gravity is present everywhere because it comes from the entire mass of the universe. It doesn’t cancel itself out, it doesn’t have positive or negative value, it all adds up. So however far away the black holes and galaxies are, they all exert a very small force everywhere in the universe, even in an electron or proton. But physicists say, “Ah, yes, but it’s so small you can ignore it; we don’t measure things that small, we do perfectly well without it.” My starting point is that that is a mistake. If you correct that mistake, you get a theory that is much better.
 
I’m now looking again at some of the ideas that were around 100 years ago and that were discarded at the time because people couldn’t understand what the ideas were trying to get at. How does matter interact with gravity? Einstein’s theory was that if you put a bit of matter in, it changes the curvature of space. And when the curvature of space changes, it acts on the matter. It’s a very complicated feedback mechanism.
 
I’m going back to Einstein and [Paul] Dirac and looking at them again with new eyes, and I think I’m seeing things that people missed. I’m filling in the holes of history, taking account of new discoveries. Archaeologists dig things up, or historians find a new manuscript, and that sheds an entirely new light. So that’s what I’ve been doing. Not by going into libraries, but by sitting in my room at home, thinking. If you think long enough, you get a good idea.
 
So you’re saying that the gravitational force can’t be ignored?
 
I think all the difficulty physicists have had comes from ignoring that. You shouldn’t ignore it. And the point is, I believe the mathematics gets simplified if you feed it in. If you leave it out, you make things more difficult for yourself.
 
Most people would say you don’t need to worry about gravitation when you look at atomic physics. The scale is so small that, for the kind of calculations we do, it can be ignored. In some sense, if you just want answers, that’s correct. But if you want understanding, then you’ve made a mistake in that choice.
 
If I’m wrong, well, I made a mistake. But I don’t think so. Because once you pick this idea up, there are all sorts of nice consequences. The mathematics fits together. The physics fits together. The philosophy fits together.
 
What does Witten think of your new ideas?
 
Well, it’s a challenge. Because when I talked to him in the past about some of my ideas, he dismissed them as hopeless, and he gave me 10 different reasons why they’re hopeless. Now I think I can defend my ground. I’ve spent a lot of time thinking, coming at it from different angles, and coming back to it. And I’m hoping I can persuade him that there is merit to my new approach.
 
You’re risking your reputation, but you think it’s worth it.
 
My reputation is established as a mathematician. If I make a mess of it now, people will say, “All right, he was a good mathematician, but at the end of his life he lost his marbles.”
 
A friend of mine, John Polkinghorne, left physics just as I was going in; he went into the church and became a theologian. We had a discussion on my 80th birthday and he said to me, “You’ve got nothing to lose; you just go ahead and think what you think.” And that’s what I’ve been doing. I’ve got all the medals I need. What could I lose? So that’s why I’m prepared to take a gamble that a young researcher wouldn’t be prepared to take.
 
Are you surprised to be so charged up about new ideas at this stage of your career?
 
One of my sons said to me, “Impossible, Dad. Mathematicians do all their best work by the time they’re 40. And you’re over 80. It’s impossible for you to have a good idea now.”
 
If you’re still awake and alert mentally when you’re over 80, you’ve got the advantage that you’ve lived a long time and you’ve seen many things, and you get perspective. I’m 86 now, and it’s in the last few years that I’ve had these ideas. New ideas come along and you pick up bits here and there, and the time is ripe now, whereas it might not have been ripe five or 10 years ago.
 
Is there one big question that has always guided you? 
 
I always want to try to understand why things work. I’m not interested in getting a formula without knowing what it means. I always try to dig behind the scenes, so if I have a formula, I understand why it’s there. And understanding is a very difficult notion.
 
People think mathematics begins when you write down a theorem followed by a proof. That’s not the beginning, that’s the end. For me the creative place in mathematics comes before you start to put things down on paper, before you try to write a formula. You picture various things, you turn them over in your mind. You’re trying to create, just as a musician is trying to create music, or a poet. There are no rules laid down. You have to do it your own way. But at the end, just as a composer has to put it down on paper, you have to write things down. But the most important stage is understanding. A proof by itself doesn’t give you understanding. You can have a long proof and no idea at the end of why it works. But to understand why it works, you have to have a kind of gut reaction to the thing. You’ve got to feel it.


#23 Unity (Information Organism, Members, 2,437 posts)
Hope Rekindled for Perplexing Proof
 
Three years ago, a solitary mathematician released an impenetrable proof of the famous abc conjecture. At a recent conference dedicated to the work, optimism mixed with bafflement.
 

By Kevin Hartnett
 
December 21, 2015
 
Earlier this month the math world turned toward the University of Oxford, looking for signs of progress on a mystery that has gripped the community for three years.
 
The occasion was a conference on the work of Shinichi Mochizuki, a brilliant mathematician at Kyoto University who in August 2012 released four papers that were both difficult to understand and impossible to ignore. He called the work “inter-universal Teichmüller theory” (IUT theory) and explained that the papers contained a proof of the abc conjecture, one of the most spectacular unsolved problems in number theory.
 
Within days it was clear that Mochizuki’s potential proof presented a virtually unprecedented challenge to the mathematical community. Mochizuki had developed IUT theory over a period of nearly 20 years, working in isolation. As a mathematician with a track record of solving hard problems and a reputation for careful attention to detail, he had to be taken seriously. Yet his papers were nearly impossible to read. The papers, which ran to more than 500 pages, were written in a novel formalism and contained many new terms and definitions. Compounding the difficulty, Mochizuki turned down all invitations to lecture on his work outside of Japan. Most mathematicians who attempted to read the papers got nowhere and soon abandoned the effort.
 
For three years, the theory languished. Finally, this year, during the week of December 7, some of the most prominent mathematicians in the world gathered at the Clay Mathematics Institute at Oxford in the most significant attempt thus far to make sense of what Mochizuki had done. Minhyong Kim, a mathematician at Oxford and one of the three organizers of the conference, explains that the attention was overdue.
 
“People are getting impatient, including me, including [Mochizuki], and it feels like certain people in the mathematical community have a responsibility to do something about this,” Kim said. “We do owe it to ourselves and, personally as a friend, I feel like I owe it to Mochizuki as well.”
 
The conference featured three days of preliminary lectures and two days of talks on IUT theory, including a culminating lecture on the fourth paper, where the proof of abc is said to arise. Few entered the week expecting to leave with a complete understanding of Mochizuki’s work or a clear verdict on the proof.  What they did hope to achieve was a sense of the strength of Mochizuki’s work. They wanted to be convinced that the proof contains powerful new ideas that would reward further exploration.
 

For the first three days, those hopes only grew.
 
A New Strategy
 
The abc conjecture describes the relationship between the three numbers in perhaps the simplest possible equation: a + b = c, for positive integers a, b and c. If those three numbers don’t have any factors in common apart from 1, then when the product of their distinct prime factors is raised to any fixed exponent larger than 1 (for example, exponent 1.001) the result is larger than c with only finitely many exceptions. (The number of exceptional triples a, b, c violating this condition depends on the chosen exponent.)
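In LaTeX notation, the standard formal statement that this paragraph paraphrases (with rad denoting the radical, the product of the distinct prime factors) is:

    \text{for every } \varepsilon > 0, \text{ only finitely many coprime triples } (a,b,c) \text{ with } a+b=c \text{ satisfy } c > \operatorname{rad}(abc)^{1+\varepsilon}, \qquad \operatorname{rad}(n) = \prod_{p \mid n} p.

For example, the triple 1 + 8 = 9 has rad(1·8·9) = rad(72) = 6, which is smaller than 9; the conjecture allows such exceptions but asserts that for any fixed exponent above 1 there are only finitely many of them.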
 
The conjecture cuts deep into number theory because it posits an unexpected relationship between addition and multiplication. Given three numbers, there’s no obvious reason why the prime factors of a and b would constrain the prime factors of c.
 
Until Mochizuki released his work, little progress had been made towards proving the abc conjecture since it was proposed in 1985. However, mathematicians understood early on that the conjecture was intertwined with other big problems in mathematics. For instance, a proof of the abc conjecture would improve on a landmark result in number theory. In 1983, Gerd Faltings, now a director of the Max Planck Institute for Mathematics in Bonn, Germany, proved the Mordell conjecture, which asserts that there are only finitely many rational solutions to certain types of algebraic equations, an advance for which he won the Fields Medal in 1986. Several years later Noam Elkies of Harvard University demonstrated that a proof of abc would make it possible to actually find those solutions.
 
“Faltings’ theorem was a great theorem, but it doesn’t give us any way to find the finite solutions,” Kim said, “so abc, if it’s proved in the right form, would give us a way to [improve] Faltings’ theorem.”
 
The abc conjecture is also equivalent to Szpiro’s conjecture, which was proposed by the French mathematician Lucien Szpiro in the 1980s. Whereas the abc conjecture describes an underlying mathematical phenomenon in terms of relationships between integers, Szpiro’s conjecture casts that same underlying relationship in terms of elliptic curves, which give a geometric form to the set of all solutions to a type of algebraic equation.
 

The translation from integers to elliptic curves is a common one in mathematics. It makes a conjecture more abstract and more complicated to state, but it also allows mathematicians to bring more techniques to bear on the problem. The strategy worked for Andrew Wiles when he proved Fermat’s Last Theorem in 1994. Rather than working with the famously simple but constraining formulation of the problem (which states that there is no solution in positive integers to the equation a^n + b^n = c^n for any integer value of n greater than 2), he translated it twice over: once into a statement about elliptic curves and then into a statement about another type of mathematical object called “Galois representations” of elliptic curves. In the land of Galois representations, he was able to generate a proof that he could apply to the original statement of the problem.
 
Mochizuki employed a similar strategy in his work on abc. Rather than proving abc directly, he set out to prove Szpiro’s conjecture. And to do so, he first encoded all the relevant information from Szpiro’s conjecture in terms of a new class of mathematical objects of his own invention called Frobenioids.
 
Before Mochizuki began working on IUT theory, he spent a long time developing a different type of mathematics in pursuit of an abc proof. He called that line of thought “Hodge-Arakelov theory of elliptic curves.” It ultimately proved inadequate to the task. But in the process of creating it, he developed the idea of the Frobenioid, which is an algebraic structure extracted from a geometric object.
 
To understand how this works, consider a square with the corners labeled A, B, C and D, with corner A in the lower right and corner B in the upper right. The square can be manipulated in a number of ways that preserve its physical location. For example, it can be rotated by 90 degrees counterclockwise, so that the arrangement of the labeled corners, starting from the lower right, ends up as (D, A, B, C). Or it can be rotated 180, 270 or 360 degrees, or flipped across either of its diagonals.
 
Each manipulation that preserves its physical location is called a symmetry of the square. All squares have eight such symmetries. To keep track of the different symmetries, mathematicians might impose an algebraic structure on the collection of all ways to label the corners. This structure is called a “group.” But as the group becomes freed from the geometric constraints of a square, it acquires new symmetries. No set of rigid motions will get you a square that can be labeled (A, C, B, D), since in the geometric square, A always has to be adjacent to B. Yet the labels in the group can be rearranged any way you want — 24 different ways in all.
 

Thus the algebraic group of the symmetries of the labels actually contains three times as much information as the geometric object that gave rise to it. For geometric objects more complicated than squares, such additional symmetries lead mathematicians to insights that are inaccessible if they use only the original geometry.
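The counting in the last two paragraphs is easy to verify directly. Below is a small illustrative Python sketch (not from the article) that generates the rigid symmetries of the labeled square from a rotation and a diagonal reflection and compares them with all possible relabelings; the corner labels are read counterclockwise starting from the lower right, matching the description above.

from itertools import permutations

corners = ('A', 'B', 'C', 'D')  # read counterclockwise from the lower right

def rotate(labels):
    # 90-degree counterclockwise rotation: each label shifts one position
    return labels[-1:] + labels[:-1]

def reflect(labels):
    # reflection across one diagonal: the two corners off that diagonal swap
    a, b, c, d = labels
    return (a, d, c, b)

# build the group of rigid motions by repeatedly composing the two generators
symmetries = set()
frontier = [corners]
while frontier:
    s = frontier.pop()
    if s not in symmetries:
        symmetries.add(s)
        frontier.extend([rotate(s), reflect(s)])

print(len(symmetries))                     # 8  rigid symmetries of the square
print(len(set(permutations(corners))))     # 24 arbitrary relabelings
print(('A', 'C', 'B', 'D') in symmetries)  # False: unreachable by rigid motions

Running it prints 8, 24, and False, confirming that (A, C, B, D) is one of the 16 arrangements that only the abstract group of relabelings, and not the geometric square, can reach.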
 
Frobenioids work in much the same way as the group described above. Instead of a square, they are an algebraic structure extracted from a special kind of elliptic curve. Just as in the example above, Frobenioids have symmetries beyond those arising from the original geometric object. Mochizuki expressed much of the data from Szpiro’s conjecture — which concerns elliptic curves — in terms of Frobenioids. Just as Wiles moved from Fermat’s Last Theorem to elliptic curves to Galois representations, Mochizuki worked his way from the abc conjecture to Szpiro’s conjecture to a problem involving Frobenioids, at which point he aimed to use the richer structure of Frobenioids to obtain a proof.
 
“From Mochizuki’s point of view, it’s all about looking for a more fundamental reality that lies behind the numbers,” Kim said. At each additional level of abstraction, previously hidden relationships come into view. “Many more things are related at an abstract level than they are at a concrete level,” he said.
 
In presentations at the end of the third day and first thing on the fourth day, Kiran Kedlaya, a number theorist at the University of California, San Diego, explained how Mochizuki intended to use Frobenioids in a proof of abc. His talks clarified a central concept in Mochizuki’s method and generated the most significant progress at the conference thus far. Faltings, who was Mochizuki’s doctoral adviser, wrote in an email that he found Kedlaya’s talks “inspiring.”
 
“Kedlaya’s talk was the mathematical high point of the meeting,” said Brian Conrad, a number theorist at Stanford University who attended the conference. “I wrote to a lot of people on Wednesday evening to say, wow, this thing came up in Kedlaya’s talk, so on Thursday we’re probably going to see something very interesting.”
 
It wasn’t to be.
 
‘Good Confusion’
 
The understanding that Mochizuki had recast abc in terms of Frobenioids was a surprising and intriguing development. By itself, though, it didn’t say much about what a final proof would look like.
 
Kedlaya’s exposition of Frobenioids had provided the assembled mathematicians with their first real sense of how Mochizuki’s techniques might circle back to the original formulation of Szpiro’s conjecture. The next step was the essential one — to show how the reformulation in terms of Frobenioids made it possible to bring genuinely new and powerful techniques to bear on a potential proof.
 

These techniques appear in Mochizuki’s four IUT theory papers, which were the subject of the last two days of the conference. The job of explaining those papers fell to Chung Pang Mok of Purdue University and Yuichiro Hoshi and Go Yamashita, both colleagues of Mochizuki’s at the Research Institute for Mathematical Sciences at Kyoto University. The three are among a small handful of people who have devoted intense effort to understanding Mochizuki’s IUT theory. By all accounts, their talks were impossible to follow.
 
Felipe Voloch, a number theorist at the University of Texas, Austin, attended the conference and posted updates throughout the five days on the social-media site Google Plus. Like Conrad, he went into the Thursday talks anticipating a breakthrough — one that never came. Later that fourth day he wrote, “At the afternoon tea break, everybody was confused. I asked many people and nobody had a clue.” Conrad echoes that sentiment, explaining that the talks were a blizzard of technical terms.
 
“The reason it fell apart is not meant as a reflection of anything with Mochizuki,” he said. “I mean, far too much information was thrown at the audience in far too little time. I spoke with every participant there who was not previously involved in this work and we were all completely and totally lost.”
 
The failure of the final talks to communicate how Frobenioids are used in IUT theory was partly to be expected, according to some participants.
 
“I think there was some hope that we’d be able to follow the trail all the way through to the end, but frankly the material gets substantially more difficult at that point,” Kedlaya said. “It’s not entirely the fault of the speakers who came after me.”
 
Kim thinks the trouble with the final talks is due in part to cultural differences. Yamashita and Hoshi are both Japanese; Kim explains that in Japan, mathematicians are more accustomed to dealing with a steady succession of technical definitions in presentations. “That was one situation where cultural differences really did play something of a role,” Kim said. “Many dense slides requiring a good deal of patience and focus — that kind of thing is more acceptable in Japan. People are more used to a dialectic, interactive style when you go to a lecture in the U.S.”
 
While the conference did not yield an unequivocal outcome (as few people really expected it to do), it did produce real, if incremental, progress. Kedlaya said afterward that he felt motivated to correspond with others who have read more of IUT theory and that he planned to attend the next conference on the topic, in July at Kyoto University.
 
“I’m not unhappy with the amount of progress that was made,” Kedlaya said. “We wanted more, but I think it’s worth the effort of this community to take at least one more run at this and see if we can get further.”
 
Others think the onus remains on Mochizuki to better explain his work. “[I] got the impression that unless Mochizuki himself writes a readable paper, the matter will not be resolved,” Faltings said by email.
 
Kim is less certain that this step will be necessary. After everyone had left Oxford, he reflected on the confusion the attendees took home with them. As he saw it, it was good confusion, the kind that develops when you’re on your way to learning something.
 
“Prior to the workshop I would say most people who came generally had no idea of what the author was attempting in the IUT papers,” he said. “Last week people were still confused, but they had a pretty concrete outline of what the author was trying to do. How does he do it? That was a vague question. Now there are many more questions, but they’re much more sophisticated kinds of questions.”
 


#24 Unity (Information Organism, Members, 2,437 posts)
Math Quartet Joins Forces on Unified Theory
 
A new breakthrough that bridges number theory and geometry is just the latest triumph for a close-knit group of mathematicians.
 
4Theorists_1200.jpg
 
The mathematicians Wei Zhang, Xinwen Zhu, Zhiwei Yun and Xinyi Yuan.
 
By Kevin Hartnett
 
December 8, 2015
 
One of the first collaborations Xinyi Yuan and Wei Zhang ever undertook was a trip to the Social Security office. It was the fall of 2004 and the two of them were promising young graduate students in mathematics at Columbia University. They were also friends from their college years at Peking University in Beijing. Yuan had come to Columbia a year earlier than Zhang, and now he was helping his friend get a Social Security number. The trip did not go well.
 
“We went there, and we were told that some document of Wei’s was missing and that he couldn’t do it at that time,” Yuan recalled.
 
That failed attempt was one of the few unsuccessful team efforts the two have undertaken since coming to the U.S. Zhang, who is now a professor at Columbia, and Yuan, now an assistant professor at the University of California, Berkeley, are members of an unofficial quartet of Chinese mathematicians who have been friends since their undergraduate days at Peking University in the early 2000s and now hold positions in some of the best mathematics departments in the world.
 
That a number of elite mathematicians would come out of the same class at a top university is unusual, but not unprecedented. The most recent example is Manjul Bhargava, Kiran Kedlaya and Lenny Ng, freshman classmates at Harvard University who went on to become distinguished mathematicians. They remain good friends and all traveled to Seoul in 2014 when Bhargava won the Fields Medal.
 
What’s unusual about the group formed by Zhang, Yuan and their two friends is the degree to which they continue to collaborate and the extraordinary amount of successes that they’ve had.
 
“They are not only good, they work in almost the same areas, and because they learned together, they influenced each other, and even as mature mathematicians they’re collaborative,” said Shou-Wu Zhang, a mathematician at Princeton University who knows all four and was influential in recruiting Zhang and Yuan to study in the U.S.
 
In addition to Zhang and Yuan, the other members of the group are Zhiwei Yun, an associate professor at Stanford University, and Xinwen Zhu, an associate professor at the California Institute of Technology. Yun and Zhu work in the field of algebraic geometry, while Zhang and Yuan work in number theory. This split in fields provides them with complementary perspectives on what is probably the single biggest project in mathematics, the Langlands program, which has been described by the Berkeley mathematician Edward Frenkel (who was Zhu’s graduate adviser) as “a kind of grand unified theory of mathematics.” The program, first envisioned by the mathematician Robert Langlands in the late 1960s, seeks to draw connections between number theory and geometry, so as to use tools from one field to make discoveries in the other.
 
One obstacle to pursuing the Langlands program is that it’s difficult for a single mathematician to know both fields deeply enough to see all the connections between the two. Yet mathematicians from different fields may have trouble communicating with one another. The best collaborations involve mathematicians who have deep knowledge of  different fields, but who also know just enough in common to talk to each other.
 
That is the case with these four mathematicians. They are all individually talented, and each has pursued his own research interests over the years. But they are also close friends with a shared background and a similar approach to mathematics. This has allowed them to prompt each other, teach each other, and make discoveries together that they might not otherwise have made so easily. These include several smaller papers they’ve written in tandem and now, most recently, their biggest collaborative discovery yet — a forthcoming result by Zhang and Yun that’s already being hailed as one of the most exciting breakthroughs in an important area of number theory in the last 30 years.
 
The Early Years
 
Before their mathematical abilities drew them together, the four grew up in different parts of China. Zhu is from Chengdu, a provincial capital in the southwest. Yun grew up in a town outside Shanghai called Changzhou. At first he was more interested in calligraphy than math. Then, when he was in third grade, a teacher, recognizing Yun’s potential, explained to him that the repeating decimal 0.9999… is exactly equal to one. Yun puzzled over this unexpected fact for months. After that, he was hooked.
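For readers who have not seen it, one standard way to verify that fact (a textbook argument, not from the article), in LaTeX notation:

    x = 0.\overline{9} \ \Rightarrow\ 10x = 9.\overline{9} \ \Rightarrow\ 10x - x = 9 \ \Rightarrow\ x = 1, \qquad \text{or} \qquad 0.\overline{9} = \sum_{n=1}^{\infty} \frac{9}{10^{n}} = \frac{9/10}{1 - 1/10} = 1.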
 
Yuan started out in the least auspicious circumstances of the four. He was born in a village close to Wuhan, a poor area with few resources for cultivating mathematical genius. But his teachers quickly noticed his talent.
 
“My math teachers liked me very much in first and second grade, and I could tell they were surprised by my ability,” he said. “Mainly that I got very high scores, usually perfect scores on exams.” Later, he enrolled in the prestigious Huanggang High School.
 
In China, as in other countries, there are structures in place that make it likely that top mathematical talents will eventually meet. Zhu and Zhang, who grew up 300 miles away from Chengdu, first met at a summer math camp after 10th grade. Yun and Yuan were both members of the Chinese national Math Olympiad team, a status that reflected their particular technical skill and prowess at solving problems.
 
In August 2000, the four were among 200 students in the entering class at Peking University. Many of their classmates were good at math, but most aspired to careers in practical fields like finance or computer science. By their junior year, their class had divided up according to interests, and Yuan, Zhang, Yun and Zhu found themselves placed together in a small group focused on pure math.
 
At that point the four became friends in the typical college way. They’d watch movies, go hiking, and play soccer and basketball together. Yuan, whom they all describe as the most athletic of the group, usually won. During this period, in class and in discussions they organized among themselves, the four also encountered for the first time some of the mathematical concepts, such as automorphic forms, that would later form the focus of their careers. And as they made their way into the world of higher mathematics, they realized they were all fascinated by the same kind of mathematical research.
 
“By the end of college it was pretty clear to me that the four of us shared a similar taste in mathematics,” Yun said. “That taste is structure-based mathematics. Instead of doing computation, all of us are interested in the big picture and finding interesting examples demonstrating general principles.”
 
Yuan was the first of the group to take this perspective to the United States. In 2003 he went to Columbia to work with Shou-Wu Zhang. He was drawn abroad by the feeling that in China, he wouldn’t be able to realize his potential as a mathematician.
 
“I somehow thought that the professors [with whom I interacted] at Peking University were not good enough, were not top mathematicians,” he said. “I wanted to come to the United States earlier just to see these great mathematicians.”
 
Yuan’s experience as a graduate student surpassed his expectations. It wasn’t just that he suddenly found himself attending conferences and colloquia with the brightest mathematicians in the world. It was also that as he observed these mathematicians up close, he gained a new appreciation for the immense potential in the discipline he’d chosen to pursue.
 
“In China, mathematicians were not that happy, like somehow they didn’t seem to enjoy math. They gave off the impression that math was hard and you needed to be cautious to choose math as your lifetime career,” he said. “Columbia was totally different. One important thing I saw there was happiness in math, motivation, optimism. These are the parts I didn’t see in China.”
 
A year later, Yuan’s friends followed him to graduate school in the United States: Zhu went to Berkeley, Yun to Princeton, and Zhang to Columbia. Zhang remembers that soon after arriving in the United States, he realized he’d miscalculated when he would receive his first stipend check and was going to run out of cash. Yuan, who’d had a year to figure out the intricacies of direct deposit, gave him some money to get by.
 
Even more crucially, Yuan helped Zhang get his bearings in the math department at Columbia. “He gave me more direct access in understanding what professors here studied,” Zhang said. Zhang was particularly attracted to Shou-Wu Zhang’s research. Shou-Wu Zhang, who later left Columbia for Princeton, worked simultaneously in number theory and arithmetic algebraic geometry. Wei Zhang was impressed by what he describes as Shou-Wu Zhang’s ability to “expose ideas directly without hiding them behind a lot of technical ideas.”
 
Eventually Wei Zhang decided to focus his dissertation research on L-functions, a central topic in modern number theory and one of the most interesting. In particular, he was interested in generalizing the Gross-Zagier formula, which applies to a certain subset of L-functions, to a much broader range of L-functions. This work, which presaged his most recent discovery with Yun, was closely related to Shou-Wu Zhang’s own research, but not confined by it. The freedom to chart one’s own mathematical path, even as a graduate student, is something Wei Zhang likely would not have found had he stayed in China.
 
“In the Chinese way, you 100 percent follow your teacher and do the problem that’s left from the teacher’s research area,” Shou-Wu Zhang said. “The American way is, you take teacher’s advice with some modification.”
 
At the same time that Wei Zhang was exploring L-functions, Yuan was finding his own way in number theory, and Yun and Zhu were establishing their research programs in algebraic geometry. During and after graduate school, the four stayed in regular contact. Their paths often crossed in the country’s math centers — in Cambridge, where Yun was a postdoctoral fellow at the Massachusetts Institute of Technology and Zhu was at Harvard, and at Princeton, where Yuan and Yun overlapped during the 2008-2009 academic year.
 
During that year at Princeton, Yuan and Yun met regularly and began to develop their collaborative style. In informal conversations, Yuan explained the intricacies of number theory to his geometer friend. They spoke in Mandarin and conversed easily; Yuan had a good understanding of what Yun knew and didn’t know, and Yun was able to ask questions, even simple ones, without fear of looking naive. “Because he was able to explain many things to me,” Yun said, “I didn’t find it very difficult, while before I found number theory too difficult for me.”
 
These conversations, along with the work of the 2010 Fields medalist Ngô Bảo Châu, helped Yun understand that many of the techniques he knew from algebraic geometry could be used to attack problems in number theory. This was the goal of the Langlands program, and it had been made apparent to Yun in a very direct way. Now all he needed was a question to address.
 
The Breakthrough
 
In December 2014, Zhang flew from New York to the West Coast, where he saw Yun and Yuan. The reason for the trip was a 60th-birthday conference at the Mathematical Sciences Research Institute in Berkeley for the Columbia mathematician Michael Harris, but Zhang also arrived with an idea he wanted to share with his friends. That idea had grown out of a conversation he’d had with Yun back in 2011. At that time, Yun had been thinking about work Zhang had done even earlier on a problem in the Langlands program known as the arithmetic fundamental lemma. Yun thought that some of those ideas could be combined with techniques from algebraic geometry, but he told Zhang he wasn’t sure if it was possible.
 
“I had some geometric idea which could be true, but I couldn’t make it precise because I was lacking some vision in number theory,” Yun said.  “I told Wei, Do you think this thing could be true? He wasn’t sure.”
 
They left the conversation there for several years. Then in 2014, Zhang realized that Yun’s intuition was correct, and he began to see what it would take to prove it. The problem at hand involved L-functions, which Zhang had studied in graduate school. L-functions have what’s known as a Taylor expansion, in which they can be expressed as a sum of increasing powers. In 1986 Benedict Gross and Don Zagier were able to calculate the first term in the series.
 
Although L-functions were initially purely objects of number theory, they can also have a geometric interpretation, and powerful techniques from algebraic geometry can be used to study them. Yun had guessed that every term in the Taylor expansion should have a geometric interpretation; Zhang was able to precisely define what such an interpretation would look like. Whereas Gross and Zagier (and the French mathematician Jean-Loup Waldspurger) had been able to obtain exact formulas for the first and second term in the expansion, the new work would show how to obtain a geometric formula for every term.
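Schematically, in LaTeX notation (a generic Taylor expansion, not the precise normalization used in the work being described):

    L(s) = \sum_{r \ge 0} c_r\,(s - s_0)^{r}, \qquad c_r = \frac{L^{(r)}(s_0)}{r!},

where s_0 is the central point of the L-function; the result described here gives a geometric formula for every coefficient c_r, whereas the earlier formulas of Gross–Zagier and Waldspurger handled only the first terms.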
 
Zhang explained his thinking to Yun and Yuan at Yuan’s house. As he listened, Yun remembers thinking that Zhang’s ideas fit together so well, they had to be true.
 
“He had the vision for this sort of global picture that made what I had vaguely in my mind very precise,” Yun said. “I think I was really kind of astonished when he laid out the whole thing. It was so beautiful.”
 
After that night, it took Zhang and Yun about nine months to prove their ideas. By September of this year, they had an early draft of a paper and began to give informal talks on their efforts. By the end of November, they had a completed draft. Shou-Wu Zhang, who has seen the work, estimates they completed the work at least a year faster than Wei Zhang could have managed on his own — assuming, that is, that the approach would have even occurred to him.
 
The result still has to go through peer review, but it is already generating excitement in the math world. Among other implications, it opens a whole new window onto the famed Birch and Swinnerton-Dyer conjecture, which is one of the seven Millennium Prize Problems that carry a $1 million award for whoever solves them first.
 
But the effects of Zhang and Yun’s latest work go beyond math. Zhang and Yun met as teenagers, grew up with Zhu and Yuan across two continents, and came of age together as mathematicians. Now the benefits of the friendship are spilling over into the rest of the mathematical world.
 
“These four people have different styles and methodologies to attack problems, so combined together, it’s simply great,” says Shou-Wu Zhang.
 
Editor’s note: Benedict Gross is a member of Quanta Magazine’s advisory board.
 
Update on December 10, 2015: This article has been updated to include links to the new work.
 
This article was reprinted on Wired.com.


#25
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts

Nature inspires applied math project that seeks to improve traffic flow

 

June 13, 2016 by Adam Zewe

 

Tiny and industrious, ants are models of teamwork and efficiency. The picnic-wrecking insects could also teach city planners a thing or two about how to optimize the timing of traffic signals, according to students at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS).

 

For their final project in "Advanced Optimization" (AM 221), taught by Yaron Singer, assistant professor of computer science, applied math concentrators Robert Chen, A.B. '17, and Alex Wang, A.B. '17, used a heuristic technique called "ant colony optimization" (ACO) to find the most efficient time settings for traffic signals on a grid of city streets.

"Efficiently controlling traffic light cycles could have major benefits for the public, since people spend more and more time sitting in traffic," Wang said. "Optimization could reduce wait times at intersections across an entire city, saving time and money for a lot of people."

 

Traffic systems have often been the target of different modeling techniques, but the inherent complexity of traffic flow, which becomes even more complicated when additional intersections and light cycles are added, makes it difficult to optimize mathematically, Chen explained.

 

So Wang and Chen took a page from Mother Nature's book. In ACO, simulated "ants" traverse a graph that maps all possible solutions to the problem—in this case, all possible light cycle combinations on a square grid of city streets.

"These simulated ants deposit pheromone chemicals on the paths that they take. They will deposit more pheromones on solutions that are better—more efficient—and this encourages more ants to follow that path and converge on the optimal solution," Wang explained.

 

The ants traverse all the edges and vertices of the graph, but they travel over the edges with the highest pheromone values more often. In this case, these well-traveled edges represented the optimal timings for each traffic light, and were more likely to be used in future iterations as the simulation progressed.

 

The large number of possible traffic light settings initially made this approach computationally infeasible. Chen and Wang developed a novel solution: they adjusted the graph so that pairs of vertices shared the edges, reducing the total number of possibilities that needed to be explored in each round of simulation. They further enhanced their study by taking into account the relative ranking of ants, as opposed to evaluating the ants on an absolute scale, which is typical in ACO. In each round of the simulation, how much pheromone the ants deposited compared to other ants determined the optimal solutions used in the next round of simulation.
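To make the idea concrete, here is a minimal, generic sketch of rank-based ant colony optimization over discrete green-light durations. It is not the students' code: the pheromone table, the parameter values and the toy cost function (a stand-in for a traffic simulation that would report mean travel time) are all assumptions made purely for illustration.

import random

def rank_based_aco(n_lights, options, cost, n_ants=20, n_iters=50,
                   evaporation=0.5, top_k=5, seed=0):
    """Generic rank-based ant colony optimization over discrete choices.
    Each 'ant' picks one green-light duration per intersection; after each
    round only the top-ranked ants deposit pheromone, and better ranks
    deposit more, so good timings become more likely to be picked again."""
    random.seed(seed)
    # pheromone[i][d] = attractiveness of giving intersection i duration d
    pheromone = [{d: 1.0 for d in options} for _ in range(n_lights)]
    best = None
    for _ in range(n_iters):
        ants = []
        for _ in range(n_ants):
            plan = [random.choices(options,
                                   weights=[pheromone[i][d] for d in options])[0]
                    for i in range(n_lights)]
            ants.append((cost(plan), plan))
        ants.sort(key=lambda a: a[0])                    # lower cost = better rank
        if best is None or ants[0][0] < best[0]:
            best = ants[0]
        for table in pheromone:                          # evaporation
            for d in table:
                table[d] *= (1.0 - evaporation)
        for rank, (c, plan) in enumerate(ants[:top_k]):  # rank-based deposit
            for i, d in enumerate(plan):
                pheromone[i][d] += (top_k - rank) / (1.0 + c)
    return best

# Toy cost: squared mismatch against some (hidden) ideal durations, standing in
# for a traffic simulation that would report mean travel time.
ideal = [20, 40, 30, 30, 50]
cost = lambda plan: sum((p - q) ** 2 for p, q in zip(plan, ideal))
print(rank_based_aco(n_lights=5, options=[10, 20, 30, 40, 50, 60], cost=cost))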

 

"The adjustments we've made to ACO are actually reminiscent of several machine learning techniques, so it would be interesting to apply this ACO method to other problems that are generally tackled with machine learning and see if there are any differences," Chen said.

 

In the end, their modified ACO system led to a 7 percent decrease in mean travel time over standard baselines used in some cities. In practice, rather than simply timing each green light for an arbitrary 30 seconds, the ACO model might lead city planners to time a green light at one intersection for 20 seconds, while another would stay green for 40 seconds. With this theoretical model, the next step would be to apply the techniques to real-world city data to see how it works with a dynamic traffic system, said Wang.

 

For Wang and Chen, the biggest lesson they learned from the project—aside from the power of mathematics to solve real-world problems—is that experimentation doesn't always lead to expected results. At the start of their research, they thought the ACO method would be too complex to effectively solve their problem.

 

"So often in research projects, the intuitive methods are the ones that perform better, but you never truly know what you are going to find," said Chen. "Whatever avenues you start exploring, it is critical to put your assumptions aside and consider your next steps based on what you see from the results."



Read more at: http://phys.org/news...raffic.html#jCp


#26
Infinite

Infinite

    Member

  • Members
  • PipPipPipPipPipPip
  • 649 posts
  • Location: Dublin, Ireland



http://www.scienceal...t-maths-forever

The truth is often bitter.


#27
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts

Mathematical framework offers a more detailed understanding of network relationships

 


 

(Phys.org)—A trio of mathematicians and computer scientists has developed a generalized framework for clustering networks on the basis of higher-order connectivity patterns. In their paper published in the journal Science, Austin Benson and Jure Leskovec of Stanford University and David Gleich of Purdue University outline their framework and offer real-life examples of ways their techniques can be applied to help understand complex networks in simpler ways. Nataša Pržulj and Noël Malod-Dognin of University College London offer an analysis of the trio's work in a Perspectives piece in the same journal issue.
 
As the authors note, it is not difficult to make out patterns in very small networks; a person trying to do so need only watch the system at work for a period of time. It is when networks become bigger and more complex that they become unwieldy. Even then, low-order patterns are often still easy to discern—counting nodes or edges, for example, gives some measure of a network's size—but such counts say very little about what the network does and how. That is where higher-order organizational principles come into play. Unfortunately, attempts to provide more information or detail about such systems have, to date, not met with much success. In this new effort, the researchers describe a framework that brings some of the pattern recognition possible in smaller networks to more complex ones.
 
They start, Pržulj and Malod-Dognin note, with one of the more common higher-order structures: small network subgraphs, which they refer to as network motifs. Motifs that are statistically significant can serve as the building blocks of a mathematical framework, which is what the researchers have done. Relationships among the motifs were then identified by applying clustering algorithms. The result is a framework that identifies which of the motifs are the most critical when a network is in operation.
 
The trio tested their framework technique by using it to analyze part of the neuronal network of a roundworm, and report that it revealed the particular cluster of 20 neurons responsible for performing actions such as standing and wiggling its head. They also gained insights into air traffic patterns by using it to perform an analysis of airports in the U.S. and Canada. They suggest such frameworks may be used in a wide variety of applications.
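For a sense of what clustering on higher-order connectivity patterns means in practice, here is a minimal sketch of the general idea using the triangle as the motif: build a motif adjacency matrix that counts the triangles through each edge, then cut the resulting weighted graph spectrally. This is an illustrative toy, not the authors' code, and the six-node example graph is invented for the demonstration.

import numpy as np

def triangle_motif_adjacency(A):
    """W[i, j] = number of triangles containing the edge (i, j); here the
    chosen motif is the triangle, the simplest higher-order pattern."""
    return (A @ A) * A   # common neighbours of i and j, kept only where (i, j) is an edge

def motif_spectral_split(W):
    """Bipartition the motif-weighted graph by the sign of the Fiedler vector
    (eigenvector of the second-smallest eigenvalue of the motif Laplacian)."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1] >= 0

# Invented toy graph: two triangles {0,1,2} and {3,4,5}, bridged by edges (1,3) and (2,3).
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3), (1, 3)]:
    A[i, j] = A[j, i] = 1
print(motif_spectral_split(triangle_motif_adjacency(A)))
# The two triangle-dense groups land on opposite sides of the split
# (which side is labelled True is arbitrary, since eigenvector signs are).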
 
 
More information: A. R. Benson et al. Higher-order organization of complex networks, Science (2016). DOI: 10.1126/science.aad9029
 
Abstract 
 
Networks are a fundamental tool for understanding and modeling complex systems in physics, biology, neuroscience, engineering, and social science. Many networks are known to exhibit rich, lower-order connectivity patterns that can be captured at the level of individual nodes and edges. However, higher-order organization of complex networks—at the level of small network subgraphs—remains largely unknown. Here, we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns. This framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges. The framework reveals higher-order organization in a number of networks, including information propagation units in neuronal networks and hub structure in transportation networks. Results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns. 
 
Journal reference: Science
 
 


#28
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts

Mathematical model gives insights into cotranscriptional folding of RNA

 

Researchers propose "Oritatami" as a mathematical model of the cotranscriptional folding of RNA in order to widen applications of RNA origami—one of the most significant experimental breakthroughs in molecular self-assembly.
RNA origami, by Geary, Rothemund, and Andersen, is an experimental architecture of nanoscale rectangular tiles that self-assemble from an RNA sequence as it folds cotranscriptionally, as shown in Figure 1. Theoretical models for programming this kind of molecular self-assembly are needed, just as Winfree's abstract tile assembly model (aTAM) has yielded tremendous successes in experimental DNA tile self-assembly, in which unit DNA tiles attach to one another in a pre-programmed manner to assemble a structure.
Together with Geary, Shinnosuke Seki at the University of Electro-Communications and his colleagues Pierre-Étienne Meunier and Nicolas Schabanel, based in Finland and France, have now proposed Oritatami to help understand the nature of cotranscriptional folding, the way that cells control RNA folding in vivo.
 
With this work, rather than simply predicting the most likely conformations of RNA, they can implement computational devices out of RNA that take advantage of sequential folding to do something practical, such as count. The binary counter is a proof of concept: it implements, as a molecular self-assembly, one of the most important types of device used in technology. The authors design an oritatami binary counter (see Figure 2), which suggests a way to use cotranscriptional folding so that biomolecules can count in vivo. They also propose a fixed-parameter-tractable (FPT) algorithm to facilitate the design process of oritatami.
 
 
Oritatami binary counter: It cotranscriptionally folds a periodic sequence 0-1-2- … - 59-0-1-2- … of 60 abstract molecule (bead) types. (Top) The initial count 000 is encoded as a sequence of four bead types. (Bottom) One zig-zag amounts to …more
 
Transcription
 
Transcription is the first step of gene expression, in which a DNA template sequence, over A, C, G, T, is copied to an RNA transcript sequence, over A, C, G, U, called a messenger RNA (mRNA). An enzyme called RNA polymerase scans the DNA template and copies it letter by letter as A -> U, C -> G, G -> C, and T -> A.
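This letter-by-letter copying rule is simple enough to state in a few lines of code; the short template sequence below is just an invented example.

def transcribe(dna_template):
    """Copy a DNA template strand into messenger RNA, letter by letter,
    using the pairing rule quoted above: A -> U, C -> G, G -> C, T -> A."""
    complement = {"A": "U", "C": "G", "G": "C", "T": "A"}
    return "".join(complement[base] for base in dna_template)

print(transcribe("TACGGT"))   # prints AUGCCA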
 
Cotranscriptional folding
 
An RNA sequence tends to fold rapidly upon itself to reach a stable conformation, and hence the RNA transcript already begins to fold while it is still being produced. What characterizes the folding of RNA transcripts is that they fold in a continuous process as they are transcribed, which means the folding is controlled by the rate of strand production. This way of folding is therefore called cotranscriptional, or kinetic, folding. In kinetic folding, locally stable structures are preferred over folds that would be more stable overall, because forming those would require first unfolding parts of the strand.
 
 
Provided by: University of Electro-communications
 
 
/* Wow this is really good work.  Not only can it be used to gain a better understanding of RNA transcription, but it can also be used to simulate it computationally or perform mathematical operations using biomolecules */


#29
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
At the Far Ends of a New Universal Law
 
A potent theory has emerged explaining a mysterious statistical law that arises throughout physics and mathematics.
 
 
By Natalie Wolchover
 
October 15, 2014
 
Imagine an archipelago where each island hosts a single tortoise species and all the islands are connected — say by rafts of flotsam. As the tortoises interact by dipping into one another’s food supplies, their populations fluctuate.
 
In 1972, the biologist Robert May devised a simple mathematical model that worked much like the archipelago. He wanted to figure out whether a complex ecosystem can ever be stable or whether interactions between species inevitably lead some to wipe out others. By indexing chance interactions between species as random numbers in a matrix, he calculated the critical “interaction strength” — a measure of the number of flotsam rafts, for example — needed to destabilize the ecosystem. Below this critical point, all species maintained steady populations. Above it, the populations shot toward zero or infinity.
 
Little did May know, the tipping point he discovered was one of the first glimpses of a curiously pervasive statistical law.
 
The law appeared in full form two decades later, when the mathematicians Craig Tracy and Harold Widom proved that the critical point in the kind of model May used was the peak of a statistical distribution. Then, in 1999, Jinho Baik, Percy Deift and Kurt Johansson discovered that the same statistical distribution also describes variations in sequences of shuffled integers — a completely unrelated mathematical abstraction. Soon the distribution appeared in models of the wriggling perimeter of a bacterial colony and other kinds of random growth. Before long, it was showing up all over physics and mathematics.
 
 
Harold Widom, left, and Craig Tracy pictured in 2009 at the Oberwolfach Research Institute for Mathematics in Germany.
 
“The big question was why,” said Satya Majumdar, a statistical physicist at the University of Paris-Sud. “Why does it pop up everywhere?”
 
Systems of many interacting components — be they species, integers or subatomic particles — kept producing the same statistical curve, which had become known as the Tracy-Widom distribution. This puzzling curve seemed to be the complex cousin of the familiar bell curve, or Gaussian distribution, which represents the natural variation of independent random variables like the heights of students in a classroom or their test scores. Like the Gaussian, the Tracy-Widom distribution exhibits “universality,” a mysterious phenomenon in which diverse microscopic effects give rise to the same collective behavior. “The surprise is it’s as universal as it is,” said Tracy, a professor at the University of California, Davis.
 
When uncovered, universal laws like the Tracy-Widom distribution enable researchers to accurately model complex systems whose inner workings they know little about, like financial markets, exotic phases of matter or the Internet.
 
“It’s not obvious that you could have a deep understanding of a very complicated system using a simple model with just a few ingredients,” said Grégory Schehr, a statistical physicist who works with Majumdar at Paris-Sud. “Universality is the reason why theoretical physics is so successful.”
 
Universality is “an intriguing mystery,” said Terence Tao, a mathematician at the University of California, Los Angeles who won the prestigious Fields Medal in 2006. Why do certain laws seem to emerge from complex systems, he asked, “almost regardless of the underlying mechanisms driving those systems at the microscopic level?”
 
Now, through the efforts of researchers like Majumdar and Schehr, a surprising explanation for the ubiquitous Tracy-Widom distribution is beginning to emerge.
 
Lopsided Curve
 
The Tracy-Widom distribution is an asymmetrical statistical bump, steeper on the left side than the right. Suitably scaled, its summit sits at a telltale value: √2N, the square root of twice the number of variables in the systems that give rise to it and the exact transition point between stability and instability that May calculated for his model ecosystem.
 
The transition point corresponded to a property of his matrix model called the “largest eigenvalue”: the greatest in a series of numbers calculated from the matrix’s rows and columns. Researchers had already discovered that the N eigenvalues of a “random matrix” — one filled with random numbers — tend to space apart along the real number line according to a distinct pattern, with the largest eigenvalue typically located at or near √2N. Tracy and Widom determined how the largest eigenvalues of random matrices fluctuate around this average value, piling up into the lopsided statistical distribution that bears their names.
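The statement about the largest eigenvalue is easy to check numerically. The sketch below samples symmetric Gaussian matrices under a normalization (an assumption made here) in which the spectral edge sits near √(2N), and looks at how the largest eigenvalue fluctuates around that value; the matrix size and number of trials are arbitrary.

import numpy as np

def largest_eigenvalues(n, trials, seed=0):
    """Sample symmetric Gaussian random matrices, normalized (by assumption
    here) so that the spectral edge sits near sqrt(2N), and return the
    largest eigenvalue of each sample."""
    rng = np.random.default_rng(seed)
    lams = np.empty(trials)
    for k in range(trials):
        g = rng.standard_normal((n, n))
        a = (g + g.T) / 2.0                    # symmetric, off-diagonal variance 1/2
        lams[k] = np.linalg.eigvalsh(a)[-1]    # eigenvalues come back in ascending order
    return lams

N = 200
samples = largest_eigenvalues(N, trials=300)
print("mean largest eigenvalue:", samples.mean().round(2), " sqrt(2N):", np.sqrt(2 * N).round(2))
print("typical fluctuation about the edge:", samples.std().round(3))
# Centered at the edge and rescaled (by a factor of order N**(-1/6)), these
# fluctuations pile up into the lopsided Tracy-Widom curve, not a symmetric bell.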
 
 
When the Tracy-Widom distribution turned up in the integer sequences problem and other contexts that had nothing to do with random matrix theory, researchers began searching for the hidden thread tying all its manifestations together, just as mathematicians in the 18th and 19th centuries sought a theorem that would explain the ubiquity of the bell-shaped Gaussian distribution.
 
The central limit theorem, which was finally made rigorous about a century ago, certifies that test scores and other “uncorrelated” variables — meaning any of them can change without affecting the rest — will form a bell curve. By contrast, the Tracy-Widom curve appears to arise from variables that are strongly correlated, such as interacting species, stock prices and matrix eigenvalues. The feedback loop of mutual effects between correlated variables makes their collective behavior more complicated than that of uncorrelated variables like test scores. While researchers have rigorously proved that the Tracy-Widom distribution universally holds for certain classes of random matrices, they have a looser handle on its manifestations in counting problems, random-walk problems, growth models and beyond.
 
“No one really knows what you need in order to get Tracy-Widom,” said Herbert Spohn, a mathematical physicist at the Technical University of Munich in Germany. “The best we can do,” he said, is to gradually uncover the range of its universality by tweaking systems that exhibit the distribution and seeing whether the variants give rise to it too.
 
So far, researchers have characterized three forms of the Tracy-Widom distribution: rescaled versions of one another that describe strongly correlated systems with different types of inherent randomness. But there could be many more than three, perhaps even an infinite number, of Tracy-Widom universality classes. “The big goal is to find the scope of universality of the Tracy-Widom distribution,” said Baik, a professor of mathematics at the University of Michigan. “How many distributions are there? Which cases give rise to which ones?”
 
As other researchers identified further examples of the Tracy-Widom peak, Majumdar, Schehr and their collaborators began hunting for clues in the curve’s left and right tails.
 
Going Through a Phase
 
Majumdar became interested in the problem in 2006 during a workshop at the University of Cambridge in England. He met a pair of physicists who were using random matrices to model string theory’s abstract space of all possible universes. The string theorists reasoned that stable points in this “landscape” corresponded to the subset of random matrices whose largest eigenvalues were negative — far to the left of the average value of √2N at the peak of the Tracy-Widom curve. They wondered just how rare these stable points — the seeds of viable universes — might be.
 
To answer the question, Majumdar and David Dean, now of the University of Bordeaux in France, realized that they needed to derive an equation describing the tail to the extreme left of the Tracy-Widom peak, a region of the statistical distribution that had never been studied. Within a year, their derivation of the left “large deviation function” appeared in Physical Review Letters. Using different techniques, Majumdar and Massimo Vergassola of Pasteur Institute in Paris calculated the right large deviation function three years later. On the right, Majumdar and Dean were surprised to find that the distribution dropped off at a rate related to the number of eigenvalues, N; on the left, it tapered off more quickly, as a function of N².
 
In 2011, the form of the left and right tails gave Majumdar, Schehr and Peter Forrester of the University of Melbourne in Australia a flash of insight: They realized the universality of the Tracy-Widom distribution could be related to the universality of phase transitions — events such as water freezing into ice, graphite becoming diamond and ordinary metals transforming into strange superconductors.
 
Because phase transitions are so widespread — all substances change phases when fed or starved of sufficient energy — and take only a handful of mathematical forms, they are for statistical physicists “almost like a religion,” Majumdar said.
 
In the minuscule margins of the Tracy-Widom distribution, Majumdar, Schehr and Forrester recognized familiar mathematical forms: distinct curves describing two different rates of change in the properties of a system, sloping downward from either side of a transitional peak. These were the trappings of a phase transition.
 
 
Satya Majumdar, left, and Grégory Schehr at the University of Paris-Sud.
 
In the thermodynamic equations describing water, the curve that represents the water’s energy as a function of temperature has a kink at 100 degrees Celsius, the point at which the liquid becomes steam. The water’s energy slowly increases up to this point, suddenly jumps to a new level and then slowly increases again along a different curve, in the form of steam. Crucially, where the energy curve has a kink, the “first derivative” of the curve — another curve that shows how quickly the energy changes at each point — has a peak.
 
Similarly, the physicists realized, the energy curves of certain strongly correlated systems have a kink at √2N. The associated peak for these systems is the Tracy-Widom distribution, which appears in the third derivative of the energy curve — that is, the rate of change of the rate of change of the energy’s rate of change. This makes the Tracy-Widom distribution a “third-order” phase transition.
 
“The fact that it pops up everywhere is related to the universal character of phase transitions,” Schehr said. “This phase transition is universal in the sense that it does not depend too much on the microscopic details of your system.”
 
According to the form of the tails, the phase transition separated phases of systems whose energy scaled with N² on the left and N on the right. But Majumdar and Schehr wondered what characterized this Tracy-Widom universality class; why did third-order phase transitions always seem to occur in systems of correlated variables?
 
The answer lay buried in a pair of esoteric papers from 1980. A third-order phase transition had shown up before, identified that year in a simplified version of the theory governing atomic nuclei. The theoretical physicists David Gross, Edward Witten and (independently) Spenta Wadia discovered a third-order phase transition separating a “weak coupling” phase, in which matter takes the form of nuclear particles, and a higher-temperature “strong coupling” phase, in which matter melds into plasma. After the Big Bang, the universe probably transitioned from a strong- to a weak-coupling phase as it cooled.
 
After examining the literature, Schehr said, he and Majumdar “realized there was a deep connection between our probability problem and this third-order phase transition that people had found in a completely different context.”
 
Weak to Strong
 
Majumdar and Schehr have since accrued substantial evidence that the Tracy-Widom distribution and its large deviation tails represent a universal phase transition between weak- and strong-coupling phases. In May’s ecosystem model, for example, the critical point at √2N separates a stable phase of weakly coupled species, whose populations can fluctuate individually without affecting the rest, from an unstable phase of strongly coupled species, in which fluctuations cascade through the ecosystem and throw it off balance. In general, Majumdar and Schehr believe, systems in the Tracy-Widom universality class exhibit one phase in which all components act in concert and another phase in which the components act alone.
 
The asymmetry of the statistical curve reflects the nature of the two phases. Because of mutual interactions between the components, the energy of the system in the strong-coupling phase on the left is proportional to N². Meanwhile, in the weak-coupling phase on the right, the energy depends only on the number of individual components, N.
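Put schematically, restating only what is described above (this is a sketch: the detailed rate functions are omitted and constants are suppressed, so it is not a quotation from the papers):

\[
  \Pr\bigl[\lambda_{\max} \approx x\bigr] \;\sim\;
  \begin{cases}
    e^{-N^{2}\,\Phi_{-}(x)}, & x < \sqrt{2N} \quad \text{(left tail, strong coupling)},\\
    e^{-N\,\Phi_{+}(x)}, & x > \sqrt{2N} \quad \text{(right tail, weak coupling)},
  \end{cases}
\]

with the Tracy-Widom distribution governing the narrow crossover window around \(\sqrt{2N}\). The associated free energy stays continuous through its second derivative at the transition, and the mismatch between the two sides first shows up in the third derivative, which is what makes it a third-order transition.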
 
“Whenever you have a strongly coupled phase and a weakly coupled phase, Tracy-Widom is the connecting crossover function between the two phases,” Majumdar said.
 
Majumdar and Schehr’s work is “a very nice contribution,” said Pierre Le Doussal, a physicist at École Normale Supérieure in France who helped prove the presence of the Tracy-Widom distribution in a stochastic growth model called the KPZ equation. Rather than focusing on the peak of the Tracy-Widom distribution, “the phase transition is probably the deeper level” of explanation, Le Doussal said. “It should basically make us think more about trying to classify these third-order transitions.”
 
Leo Kadanoff, the statistical physicist who introduced the term “universality” and helped classify universal phase transitions in the 1960s, said it has long been clear to him that universality in random matrix theory must somehow be connected to the universality of phase transitions. But while the physical equations describing phase transitions seem to match reality, many of the computational methods used to derive them have never been made mathematically rigorous.
 
“Physicists will, in a pinch, settle for a comparison with nature,” Kadanoff said, “Mathematicians want proofs — proof that phase-transition theory is correct; more detailed proofs that random matrices fall into the universality class of third-order phase transitions; proof that such a class exists.”
 
For the physicists involved, a preponderance of evidence will suffice. The task now is to identify and characterize strong- and weak-coupling phases in more of the systems that exhibit the Tracy-Widom distribution, such as growth models, and to predict and study new examples of Tracy-Widom universality throughout nature.
 
The telltale sign will be the tails of the statistical curves. At a gathering of experts in Kyoto, Japan, in August, Le Doussal encountered Kazumasa Takeuchi, a University of Tokyo physicist who reported in 2010 that the interface between two phases of a liquid crystal material varies according to the Tracy-Widom distribution. Four years ago, Takeuchi had not collected enough data to plot extreme statistical outliers, such as prominent spikes along the interface. But when Le Doussal entreated Takeuchi to plot the data again, the scientists saw the first glimpse of the left and right tails. Le Doussal immediately emailed Majumdar with the news.
 
“Everybody looks only at the Tracy-Widom peak,” Majumdar said. “They don’t look at the tails because they are very, very tiny things.”
 
Correction: This article was revised on October 17, 2014, to clarify that Satya Majumdar collaborated with Massimo Vergassola to compute the right large deviation function, and to reflect that the insight by Forrester, Majumdar and Schehr occurred in 2011, not 2009 as originally stated.
 


#30
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
A Unified Theory of Randomness

Researchers have uncovered deep connections among different types of random objects, illuminating hidden geometric structures.

 


 

Randomness increases in a structure known as an “SLE curve.”

 

By Kevin Hartnett

 

August 2, 2016

 

Standard geometric objects can be described by simple rules — every straight line, for example, is just y = ax + b — and they stand in neat relation to each other: Connect two points to make a line, connect four line segments to make a square, connect six squares to make a cube.

 

These are not the kinds of objects that concern Scott Sheffield. Sheffield, a professor of mathematics at the Massachusetts Institute of Technology, studies shapes that are constructed by random processes. No two of them are ever exactly alike. Consider the most familiar random shape, the random walk, which shows up everywhere from the movement of financial asset prices to the path of particles in quantum physics. These walks are described as random because no knowledge of the path up to a given point can allow you to predict where it will go next.

 

Beyond the one-dimensional random walk, there are many other kinds of random shapes. There are varieties of random paths, random two-dimensional surfaces, random growth models that approximate, for example, the way a lichen spreads on a rock. All of these shapes emerge naturally in the physical world, yet until recently they’ve existed beyond the boundaries of rigorous mathematical thought. Given a large collection of random paths or random two-dimensional shapes, mathematicians would have been at a loss to say much about what these random objects shared in common.

 

Yet in work over the past few years, Sheffield and his frequent collaborator, Jason Miller, a professor at the University of Cambridge, have shown that these random shapes can be categorized into various classes, that these classes have distinct properties of their own, and that some kinds of random objects have surprisingly clear connections with other kinds of random objects. Their work forms the beginning of a unified theory of geometric randomness.

 

“You take the most natural objects — trees, paths, surfaces — and you show they’re all related to each other,” Sheffield said. “And once you have these relationships, you can prove all sorts of new theorems you couldn’t prove before.”

 

In the coming months, Sheffield and Miller will publish the final part of a three-paper series that for the first time provides a comprehensive view of random two-dimensional surfaces — an achievement not unlike the Euclidean mapping of the plane.

 

“Scott and Jason have been able to implement natural ideas and not be rolled over by technical details,” said Wendelin Werner, a professor at ETH Zurich and winner of the Fields Medal in 2006 for his work in probability theory and statistical physics. “They have been basically able to push for results that looked out of reach using other approaches.”

 

A Random Walk on a Quantum String

 

In standard Euclidean geometry, objects of interest include lines, rays, and smooth curves like circles and parabolas. The coordinate values of the points in these shapes follow clear, ordered patterns that can be described by functions. If you know the value of two points on a line, for instance, you know the values of all other points on the line. The same is true for the values of the points on each of the rays in this first image, which begin at a point and radiate outward.

 

spokes0.jpg

 

One way to begin to picture what random two-dimensional geometries look like is to think about airplanes. When an airplane flies a long-distance route, like the route from Tokyo to New York, the pilot flies in a straight line from one city to the other. Yet if you plot the route on a map, the line appears to be curved. The curve is a consequence of mapping a straight line on a sphere (Earth) onto a flat piece of paper.

 

If Earth were not round, but were instead a more complicated shape, possibly curved in wild and random ways, then an airplane’s trajectory (as shown on a flat two-dimensional map) would appear even more irregular, like the rays in the following images.

 

spokes1.jpg

 

Each ray represents the trajectory an airplane would take if it started from the origin and tried to fly as straight as possible over a randomly fluctuating geometric surface. The amount of randomness that characterizes the surface is dialed up in the next images — as the randomness increases, the straight rays wobble and distort, turn into increasingly jagged bolts of lightning, and become nearly incoherent.

 

spokes2.jpg

 

 

Yet incoherent is not the same as incomprehensible. In a random geometry, if you know the location of some points, you can (at best) assign probabilities to the location of subsequent points. And just like a loaded set of dice is still random, but random in a different way than a fair set of dice, it’s possible to have different probability measures for generating the coordinate values of points on random surfaces.

 

spokes3.jpg

 

What mathematicians have found — and hope to continue to find — is that certain probability measures on random geometries are special, and tend to arise in many different contexts. It is as though nature has an inclination to generate its random surfaces using a very particular kind of die (one with an uncountably infinite number of sides). Mathematicians like Sheffield and Miller work to understand the properties of these dice (and the “typical” properties of the shapes they produce) just as precisely as mathematicians understand the ordinary sphere.

 

The first kind of random shape to be understood in this way was the random walk. Conceptually, a one-dimensional random walk is the kind of path you’d get if you repeatedly flipped a coin and walked one way for heads and the other way for tails. In the real world, this type of movement first came to attention in 1827 when the Scottish botanist Robert Brown observed the random movements of pollen grains suspended in water. The seemingly random motion was caused by individual water molecules bumping into each pollen grain. Later, in the 1920s, Norbert Wiener of MIT gave a precise mathematical description of this process, which is called Brownian motion.

 

Brownian motion is the “scaling limit” of random walks — if you consider a random walk where each step size is very small, and the amount of time between steps is also very small, these random paths look more and more like Brownian motion. It’s the shape that almost all random walks converge to over time.
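A quick way to see the scaling-limit statement is to shrink the step size and the time step together. The sketch below does exactly that; the parameter choices are arbitrary and only meant to illustrate the rescaling.

import numpy as np

def scaled_random_walk(n_steps, t_max=1.0, seed=0):
    """A +/-1 coin-flip walk rescaled diffusively: n_steps steps of size
    sqrt(dt) over total time t_max. As n_steps grows, the path looks more
    and more like Brownian motion run for time t_max."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    steps = rng.choice([-1.0, 1.0], size=n_steps) * np.sqrt(dt)
    times = np.linspace(0.0, t_max, n_steps + 1)
    path = np.concatenate([[0.0], np.cumsum(steps)])
    return times, path

for n in (10, 1_000, 100_000):
    t, w = scaled_random_walk(n)
    print(f"{n:>6} steps, endpoint W(1) = {w[-1]: .3f}")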

 

Two-dimensional random spaces, in contrast, first preoccupied physicists as they tried to understand the structure of the universe.

 

In string theory, one considers tiny strings that wiggle and evolve in time. Just as the time trajectory of a point can be plotted as a one-dimensional curve, the time trajectory of a string can be understood as a two-dimensional surface. This surface, called a worldsheet, encodes the history of the one-dimensional string as it wriggles through time.

 

“To make sense of quantum physics for strings,” said Sheffield, “you want to have something like Brownian motion for surfaces.”

 

For years, physicists have had something like that, at least in part. In the 1980s, physicist Alexander Polyakov, who’s now at Princeton University, came up with a way of describing these surfaces that came to be called Liouville quantum gravity (LQG). It provided an incomplete but still useful view of random two-dimensional surfaces. In particular, it gave physicists a way of defining a surface’s angles so that they could calculate the surface area.

 

In parallel, another model, called the Brownian map, provided a different way to study random two-dimensional surfaces. Where LQG facilitates calculations about area, the Brownian map has a structure that allows researchers to calculate distances between points. Together, the Brownian map and LQG gave physicists and mathematicians two complementary perspectives on what they hoped were fundamentally the same object. But they couldn’t prove that LQG and the Brownian map were in fact compatible with each other.

 

“It was this weird situation where there were two models for what you’d call the most canonical random surface, two competing random surface models, that came with different information associated with them,” said Sheffield.

 

Beginning in 2013, Sheffield and Miller set out to prove that these two models described fundamentally the same thing.

 

The Problem With Random Growth

 

Sheffield and Miller began collaborating thanks to a kind of dare. As a graduate student at Stanford in the early 2000s, Sheffield worked under Amir Dembo, a probability theorist. In his dissertation, Sheffield formulated a problem having to do with finding order in a complicated set of surfaces. He posed the question as a thought exercise as much as anything else.

 

“I thought this would be a problem that would be very hard and take 200 pages to solve and probably nobody would ever do it,” Sheffield said.

 

But along came Miller. In 2006, a few years after Sheffield had graduated, Miller enrolled at Stanford and also started studying under Dembo, who assigned him to work on Sheffield’s problem as a way of getting to know random processes. “Jason managed to solve this, I was impressed, we started working on some things together, and eventually we had a chance to hire him at MIT as a postdoc,” Sheffield said.

 

In order to show that LQG and the Brownian map were equivalent models of a random two-dimensional surface, Sheffield and Miller adopted an approach that was simple enough conceptually. They decided to see if they could invent a way to measure distance on LQG surfaces and then show that this new distance measurement was the same as the distance measurement that came packaged with the Brownian map.

 

To do this, Sheffield and Miller thought about devising a mathematical ruler that could be used to measure distance on LQG surfaces. Yet they immediately realized that ordinary rulers would not fit nicely into these random surfaces — the space is so wild that one cannot move a straight object around without the object getting torn apart.

 

The duo forgot about rulers. Instead, they tried to reinterpret the distance question as a question about growth. To see how this works, imagine a bacterial colony growing on some surface. At first it occupies a single point, but as time goes on it expands in all directions. If you wanted to measure the distance between two points, one (seemingly roundabout) way of doing that would be to start a bacterial colony at one point and measure how much time it took the colony to encompass the other point. Sheffield said that the trick is to somehow “describe this process of gradually growing a ball.”

 

It’s easy to describe how a ball grows in the ordinary plane, where all points are known and fixed and growth is deterministic. Random growth is far harder to describe and has long vexed mathematicians. Yet as Sheffield and Miller were soon to learn, “[random growth] becomes easier to understand on a random surface than on a smooth surface,” said Sheffield. The randomness in the growth model speaks, in a sense, the same language as the randomness on the surface on which the growth model proceeds. “You add a crazy growth model on a crazy surface, but somehow in some ways it actually makes your life better,” he said.

 

The following images show a specific random growth model, the Eden model, which describes the random growth of bacterial colonies. The colonies grow through the addition of randomly placed clusters along their boundaries. At any given point in time, it’s impossible to know for sure where on the boundary the next cluster will appear. In these images, Miller and Sheffield show how Eden growth proceeds over a random two-dimensional surface.
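For reference, here is a minimal sketch of the Eden model itself on a flat square lattice, which is far simpler than growth on an LQG surface but shows the basic rule: keep occupying uniformly random empty sites along the cluster's boundary. The step count and random seed are arbitrary choices for the illustration.

import random

def eden_growth(steps, seed=0):
    """Eden model on the flat square lattice: start from the origin and, at
    each step, occupy a uniformly random empty site adjacent to the cluster."""
    random.seed(seed)
    cluster = {(0, 0): 0}                            # site -> time of occupation
    boundary = {(1, 0), (-1, 0), (0, 1), (0, -1)}    # empty sites touching the cluster
    for t in range(1, steps + 1):
        site = random.choice(sorted(boundary))       # sorted() only for reproducibility
        boundary.discard(site)
        cluster[site] = t
        x, y = site
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in cluster:
                boundary.add(nb)
    return cluster

colony = eden_growth(2000)
print(len(colony), "occupied sites; the cluster is roughly disc-shaped with a rough boundary")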

 

The first image shows Eden growth on a fairly flat — that is, not especially random — LQG surface. The growth proceeds in an orderly way, forming nearly concentric circles that have been color-coded to indicate the time at which growth occurs at different points on the surface.

 

LQG_0025.jpg

 

In subsequent images, Sheffield and Miller illustrate growth on surfaces of increasingly greater randomness. The amount of randomness in the function that produces the surfaces is controlled by a constant, gamma. As gamma increases, the surface gets rougher — with higher peaks and lower valleys — and random growth on that surface similarly takes on a less orderly form. In the previous image, gamma is 0.25. In the next image, gamma is set to 1.25, introducing five times as much randomness into the construction of the surface. Eden growth across this uncertain surface is similarly distorted.

 

LQG_0125.jpg

 

When gamma is set to the square root of eight-thirds (approximately 1.63), LQG surfaces fluctuate even more dramatically. They also take on a roughness that matches the roughness of the Brownian map, which allows for more direct comparisons between these two models of a random geometric surface.

 

LQG_0163.jpg

 

Random growth on such a rough surface proceeds in a very irregular way. Describing it mathematically is like trying to anticipate minute pressure fluctuations in a hurricane. Yet Sheffield and Miller realized that they needed to figure out how to model Eden growth on very random LQG surfaces in order to establish a distance structure equivalent to the one on the (very random) Brownian map.

 

“Figuring out how to mathematically make [random growth] rigorous is a huge stumbling block,” said Sheffield, noting that Martin Hairer of the University of Warwick won the Fields Medal in 2014 for work that overcame just these kinds of obstacles. “You always need some kind of amazing clever trick to do it.”

 

Random Exploration

 

Sheffield and Miller’s clever trick is based on a special type of random one-dimensional curve that is similar to the random walk except that it never crosses itself. Physicists had encountered these kinds of curves for a long time in situations where, for instance, they were studying the boundary between clusters of particles with positive and negative spin (the boundary line between the clusters of particles is a one-dimensional path that never crosses itself and takes shape randomly). They knew these kinds of random, noncrossing paths occurred in nature, just as Robert Brown had observed that random crossing paths occurred in nature, but they didn’t know how to think about them in any kind of precise way. In 1999 Oded Schramm, who at the time was at Microsoft Research in Redmond, Washington, introduced the SLE curve (for Schramm-Loewner evolution) as the canonical noncrossing random curve.
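For readers who want the definition behind the name, chordal SLE is usually specified through Loewner's differential equation driven by a sped-up Brownian motion. This is a textbook-style sketch of the standard construction, not something specific to the work described here:

\[
  \partial_t g_t(z) \;=\; \frac{2}{g_t(z) - W_t},
  \qquad g_0(z) = z,
  \qquad W_t = \sqrt{\kappa}\, B_t ,
\]

where \(B_t\) is a standard Brownian motion. The random curve is traced out by the points where the solution ceases to be defined, and the single parameter \(\kappa\) controls how wild the curve is.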

 


 

Schramm’s work on SLE curves was a landmark in the study of random objects. It’s widely acknowledged that Schramm, who died in a hiking accident in 2008, would have won the Fields Medal had he been a few weeks younger at the time he’d published his results. (The Fields Medal can be given only to mathematicians who are not yet 40.) As it was, two people who worked with him built on his work and went on to win the prize: Wendelin Werner in 2006 and Stanislav Smirnov in 2010. More fundamentally, the discovery of SLE curves made it possible to prove many other things about random objects.

 

“As a result of Schramm’s work, there were a lot of things in physics they’d known to be true in their physics way that suddenly entered the realm of things we could prove mathematically,” said Sheffield, who was a friend and collaborator of Schramm’s.

 

For Miller and Sheffield, SLE curves turned out to be valuable in an unexpected way. In order to measure distance on LQG surfaces, and thus show that LQG surfaces and the Brownian map were the same, they needed to find some way to model random growth on a random surface. SLE proved to be the way.

“The ‘aha’ moment was [when we realized] you can construct [random growth] using SLEs and that there is a connection between SLEs and LQG,” said Miller.

SLE curves come with a constant, kappa, which plays a similar role to the one gamma plays for LQG surfaces. Where gamma describes the roughness of an LQG surface, kappa describes the “windiness” of SLE curves. When kappa is low, the curves look like straight lines. As kappa increases, more randomness is introduced into the function that constructs the curves and the curves turn more unruly, while obeying the rule that they can bounce off of, but never cross, themselves. Here is an SLE curve with kappa equal to 0.5, followed by an SLE curve with kappa equal to 3.

 

SLE_05.jpg

 

Sheffield and Miller noticed that when they dialed the value of kappa to 6 and gamma up to the square root of eight-thirds, an SLE curve drawn on the random surface followed a kind of exploration process. Thanks to works by Schramm and by Smirnov, Sheffield and Miller knew that when kappa equals 6, SLE curves follow the trajectory of a kind of “blind explorer” who marks her path by constructing a trail as she goes. She moves as randomly as possible except that whenever she bumps into a piece of the path she has already followed, she turns away from that piece to avoid crossing her own path or getting stuck in a dead end.

 

SLE_03.jpg

 

“[The explorer] finds that each time her path hits itself, it cuts off a little piece of land that is completely surrounded by the path and can never be visited again,” said Sheffield.

 

Sheffield and Miller then considered a bacterial growth model, the Eden model, that had a similar effect as it advanced across a random surface: It grew in a way that “pinched off” a plot of terrain that, afterward, it never visited again. The plots of terrain cut off by the growing bacteria colony looked exactly the same as the plots of terrain cut off by the blind explorer. Moreover, the information possessed by a blind explorer at any time about the outer unexplored region of the random surface was exactly the same as the information possessed by a bacterial colony. The only difference between the two was that while the bacterial colony grew from all points on its outer boundary at once, the blind explorer’s SLE path could grow only from the tip.

 

In a paper posted online in 2013, Sheffield and Miller imagined what would happen if, every few minutes, the blind explorer were magically transported to a random new location on the boundary of the territory she had already visited. By moving all around the boundary, she would be effectively growing her path from all boundary points at once, much like the bacterial colony. Thus they were able to take something they could understand — how an SLE curve proceeds on a random surface — and show that with some special configuring, the curve’s evolution exactly described a process they hadn’t been able to understand, random growth. “There’s something special about the relationship between SLE and growth,” said Sheffield. “That was kind of the miracle that made everything possible.”

 

The distance structure imposed on LQG surfaces through the precise understanding of how random growth behaves on those surfaces exactly matched the distance structure on the Brownian map. As a result, Sheffield and Miller merged two distinct models of random two-dimensional shapes into one coherent, mathematically understood fundamental object.

 

Turning Randomness Into a Tool

 

Sheffield and Miller have already posted the first two papers in their proof of the equivalence between LQG and the Brownian map on the scientific preprint site arxiv.org; they intend to post the third and final paper later this summer. The work turned on the ability to reason across different random shapes and processes — to see how random noncrossing curves, random growth, and random two-dimensional surfaces relate to one another. It’s an example of the increasingly sophisticated results that are possible in the study of random geometry.

 

“It’s like you’re in a mountain with three different caves. One has iron, one has gold, one has copper — suddenly you find a way to link all three of these caves together,” said Sheffield. “Now you have all these different elements you can build things with and can combine them to produce all sorts of things you couldn’t build before.”

 

Many open questions remain, including determining whether the relationship between SLE curves, random growth models, and distance measurements holds up in less-rough versions of LQG surfaces than the one used in the current paper. In practical terms, the results by Sheffield and Miller can be used to describe the random growth of real phenomena like snowflakes, mineral deposits, and dendrites in caves, but only when that growth takes place in the imagined world of random surfaces. It remains to be seen whether their methods can be applied to ordinary Euclidean space, like the space we live in.

 

This article was reprinted on Wired.com

 

https://www.quantama..._of_randomness/



#31
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts

Moonshine Master Toys With String Theory

 

The physicist-mathematician Miranda Cheng is working to harness a mysterious connection between string theory, algebra and number theory

 


 

 

After the Eyjafjallajökull volcano erupted in Iceland in 2010, flight cancellations left Miranda Cheng stranded in Paris. While waiting for the ash to clear, Cheng, then a postdoctoral researcher at Harvard University studying string theory, got to thinking about a paper that had recently been posted online. Its three coauthors had pointed out a numerical coincidence connecting far-flung mathematical objects. “That smells like another moonshine,” Cheng recalled thinking. “Could it be another moonshine?”

 

She happened to have read a book about the “monstrous moonshine,” a mathematical structure that unfolded out of a similar bit of numerology: In the late 1970s, the mathematician John McKay noticed that 196,884, the first important coefficient of an object called the j-function, was the sum of one and 196,883, the first two dimensions in which a giant collection of symmetries called the monster group could be represented. By 1992, researchers had traced this farfetched (hence “moonshine”) correspondence to its unlikely source: string theory, a candidate for the fundamental theory of physics that casts elementary particles as tiny oscillating strings. The j-function describes the strings’ oscillations in a particular string theory model, and the monster group captures the symmetries of the space-time fabric that these strings inhabit.

 

By the time of Eyjafjallajökull’s eruption, “this was ancient stuff,” Cheng said — a mathematical volcano that, as far as physicists were concerned, had gone dormant. The string theory model underlying monstrous moonshine was nothing like the particles or space-time geometry of the real world. But Cheng sensed that the new moonshine, if it was one, might be different. It involved K3 surfaces — the geometric objects that she and many other string theorists study as possible toy models of real space-time.

 

By the time she flew home from Paris, Cheng had uncovered more evidence that the new moonshine existed. She and collaborators John Duncan and Jeff Harvey gradually teased out evidence of not one but 23 new moonshines: mathematical structures that connect symmetry groups on the one hand and fundamental objects in number theory called mock modular forms (a class that includes the j-function) on the other. The existence of these 23 moonshines, posited in their Umbral Moonshine Conjecture in 2012, was proved by Duncan and coworkers late last year.

 

Meanwhile, Cheng, 37, is on the trail of the K3 string theory underlying the 23 moonshines — a particular version of the theory in which space-time has the geometry of a K3 surface. She and other string theorists hope to be able to use the mathematical ideas of umbral moonshine to study the properties of the K3 model in detail. This in turn could be a powerful means for understanding the physics of the real world where it can’t be probed directly — such as inside black holes. An assistant professor at the University of Amsterdam on leave from France’s National Center for Scientific Research, Cheng spoke with Quanta Magazine about the mysteries of moonshines, her hopes for string theory, and her improbable path from punk-rock high school dropout to a researcher who explores some of the most abstruse ideas in math and physics. An edited and condensed version of the conversation follows.

 

continues at....

 

https://www.quantama...-string-theory/



#32
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
The Mathematical Shape of Things to Come

Scientific data sets are becoming more dynamic, requiring new mathematical techniques on par with the invention of calculus.

 

Simon DeDeo, a research fellow in applied mathematics and complex systems at the Santa Fe Institute, had a problem. He was collaborating on a new project analyzing 300 years’ worth of data from the archives of London’s Old Bailey, the central criminal court of England and Wales. Granted, there was clean data in the usual straightforward Excel spreadsheet format, including such variables as indictment, verdict, and sentence for each case. But there were also full court transcripts, containing some 10 million words recorded during just under 200,000 trials.

 

“How the hell do you analyze that data?” DeDeo wondered. It wasn’t the size of the data set that was daunting; by big data standards, the size was quite manageable. It was the sheer complexity and lack of formal structure that posed a problem. This “big data” looked nothing like the kinds of traditional data sets the former physicist would have encountered earlier in his career, when the research paradigm involved forming a hypothesis, deciding precisely what one wished to measure, then building an apparatus to make that measurement as accurately as possible.

 

“In physics, you typically have one kind of data and you know the system really well,” said DeDeo. “Now we have this new multimodal data [gleaned] from biological systems and human social systems, and the data is gathered before we even have a hypothesis.” The data is there in all its messy, multi-dimensional glory, waiting to be queried, but how does one know which questions to ask when the scientific method has been turned on its head?

 

DeDeo is not the only researcher grapping with these challenges. Across every discipline, data sets are getting bigger and more complex, whether one is dealing with medical records, genomic sequencing, neural networks in the brain, astrophysics, historical archives or social networks. Alessandro Vespignani, a physicist at Northeastern University who specializes in harnessing the power of social networking to model disease outbreaks, stock market behavior, collective social dynamics, and election outcomes, has collected many terabytes of data from social networks such as Twitter, nearly all of it raw and unstructured. “We didn’t define the conditions of the experiments, so we don’t know what we are capturing,” he said.

 

Today’s big data is noisy, unstructured, and dynamic rather than static. It may also be corrupted or incomplete. “We think of data as being comprised of vectors – a string of numbers and coordinates,” said Jesse Johnson, a mathematician at Oklahoma State University. But data from Twitter or Facebook, or the trial archives of the Old Bailey, look nothing like that, which means researchers need new mathematical tools in order to glean useful information from the data sets. “Either you need a more sophisticated way to translate it into vectors, or you need to come up with a more generalized way of analyzing it,” Johnson said.

 

Vespignani uses a wide range of mathematical tools and techniques to make sense of his data, including text recognition. He sifts through millions of tweets looking for the most relevant words to whatever system he is trying to model. DeDeo adopted a similar approach for the Old Bailey archives project. His solution was to reduce his initial data set of 100,000 words by grouping them into 1,000 categories, using key words and their synonyms. “Now you’ve turned the trial into a point in a 1,000-dimensional space that tells you how much the trial is about friendship, or trust, or clothing,” he explained.
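
To make the idea concrete, here is a minimal Python sketch of that kind of keyword-to-category vectorization. The category names and word lists are invented for illustration; they are not taken from the Old Bailey project, and DeDeo's actual pipeline was far richer.

from collections import Counter

# Hypothetical categories and keyword lists, purely for illustration.
CATEGORIES = {
    "violence": {"stab", "strike", "wound", "blood"},
    "property": {"steal", "purse", "shilling", "goods"},
    "trust":    {"promise", "oath", "swear", "honest"},
}

def vectorize(transcript: str) -> list[int]:
    """Turn a free-text transcript into one count per category."""
    words = Counter(transcript.lower().split())
    return [sum(words[w] for w in keywords)
            for keywords in CATEGORIES.values()]

print(vectorize("the prisoner did swear an oath he would not steal the purse"))
# -> [0, 2, 2]  i.e. (violence, property, trust)

With 1,000 categories instead of three, each trial becomes a point in a 1,000-dimensional space, which is exactly the representation DeDeo describes.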

 

Continued at...

 

https://www.quantama...things-to-come/

 

 



#33
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
The maths behind 'impossible' never-repeating patterns

 


 

Remember the graph paper you used at school, the kind that's covered with tiny squares? It's the perfect illustration of what mathematicians call a "periodic tiling of space", with shapes covering an entire area with no overlap or gap. If we move the whole pattern by the length of a tile (translate it) or rotate it by 90 degrees, we get the same pattern. That's because in this case the whole tiling has the same symmetry as a single tile. But imagine tiling a bathroom with regular pentagons instead of squares – it's impossible, because the pentagons won't fit together without leaving gaps or overlapping one another.

 

Patterns (made up of tiles) and crystals (made up of atoms or molecules) are typically periodic, like a sheet of graph paper, and have related symmetries. Among all possible arrangements, these regular arrangements are preferred in nature because they require the least energy to assemble. In fact, we have only known for a couple of decades that non-periodic tilings, which create never-repeating patterns, can exist in crystals. Now my colleagues and I have made a model that can help us understand how this is expressed.

In the 1970s, physicist Roger Penrose discovered that it was possible to make a pattern from two different shapes with the angles and sides of a pentagon. The resulting pattern looks the same when rotated through 72 degrees, meaning that over a full 360-degree turn it looks the same from five different orientations. Many small patches of the pattern are repeated over and over again. For example, in the graphic below, the five-pointed orange star appears again and again, but in each case the stars are surrounded by different shapes, which implies that the whole pattern never repeats in any direction. This graphic is therefore an example of a pattern that has rotational symmetry but no translational symmetry.
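
As an aside (a standard textbook argument, not part of the article): the reason five-fold symmetry can never coexist with translational periodicity is the so-called crystallographic restriction. If a rotation $R(\theta)$ maps a periodic lattice to itself, then written in a basis of lattice vectors its matrix has integer entries, so its trace must be an integer:

$$\operatorname{tr} R(\theta) = 2\cos\theta \in \{-2, -1, 0, 1, 2\},$$

which permits only $\theta \in \{60^\circ, 90^\circ, 120^\circ, 180^\circ, 360^\circ\}$. Since $2\cos 72^\circ = (\sqrt{5}-1)/2 \approx 0.618$ is not an integer, a pattern with 72-degree rotational symmetry cannot also have translational symmetry.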


Read more at: http://phys.org/news...tterns.html#jCp

 

[Animated graphic: a Penrose tiling in which the five-pointed orange star recurs throughout the pattern]

 

 



#34
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
New math model represents how mind processes sequential memory, may help understand psychiatric disorders
 


 

Figure A shows a representation of a stable sequential working memory; different information items or memory patterns are shown in different colors. Credit: Image adapted from Rabinovich, M.I. et al. (2014)

 

Try to remember a phone number, and you're using what's called your sequential memory. This kind of memory, in which your mind processes a sequence of numbers, events, or ideas, underlies how people think, perceive, and interact as social beings.

 

"In our life, all of our behaviors and our process of thinking is sequential in time," said Mikhail Rabinovich, a physicist and neurocognitive scientist at the University of California, San Diego.

 

To understand how sequential memory works, researchers like Rabinovich have built mathematical models that mimic this process. In particular, he and a group of researchers have now mathematically modeled how the mind switches among different ways of thinking about a sequence of objects, events, or ideas, which are based on the activity of so-called cognitive modes.

 

The new model is described in the journal Chaos, from AIP Publishing, and it may help scientists understand a variety of human psychiatric conditions that may involve sequential memory, including obsessive-compulsive disorder, bipolar disorder, attention deficit disorder, schizophrenia, and autism.

Cognitive modes are the basic states of neural activity. Thinking, perceiving, and any other neural activity incorporates various parts of the brain that work together in concert. When and where this activity occurs takes on well-defined patterns. And these patterns are called cognitive modes.

To understand what the researchers modeled, Rabinovich explained, think of the modes as competing figure skaters. You can describe the skaters in three ways. First, you can consider their backgrounds: their names or where they come from. You can also describe the technical aspects of their performances—how well they did that triple-toe-loop, for instance. Finally, you can understand their skating from a purely emotional or aesthetic perspective: their facial expressiveness, their costumes, or the simple beauty of movement. To fully comprehend the skaters, you have to constantly switch among these three perspectives. When these perspectives describe cognitive modes, they're called modalities.
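
The excerpt above does not reproduce the equations, but models of sequential switching of this general kind are often written as competitive dynamics in which each mode inhibits the others asymmetrically (so-called winnerless competition, a generalized Lotka-Volterra system). The Python sketch below simulates such a system purely as an illustration of the switching idea; it is not the specific model published in Chaos, and all parameters are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Three "cognitive modes" competing through asymmetric inhibition:
#   da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j) + weak noise
# The asymmetry in rho lets the next mode in the cycle invade while the
# previous one is strongly suppressed, so activity switches sequentially.
sigma = np.ones(3)
rho = np.array([[1.0, 1.5, 0.8],
                [0.8, 1.0, 1.5],
                [1.5, 0.8, 1.0]])

a = np.array([0.6, 0.3, 0.1])       # initial activity of the three modes
dt, steps = 0.01, 100_000
dominant = []
for t in range(steps):
    noise = 1e-4 * rng.random(3)    # tiny drive that keeps the cycling from freezing
    a = np.maximum(a + dt * (a * (sigma - rho @ a) + noise), 0.0)
    if t % 10_000 == 0:
        dominant.append(int(a.argmax()))

# The dominant mode changes over time, cycling through 0 -> 1 -> 2 -> 0 ...
print("dominant mode at regular intervals:", dominant)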

 


Read more at: http://phys.org/news...iatric.html#jCp

  • Whereas likes this

#35
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
Getting Into Shapes: From Hyperbolic Geometry to Cube Complexes and Back

A proof marks the end of an era in the study of three-dimensional shapes.

 

Thirty years ago, the mathematician William Thurston articulated a grand vision: a taxonomy of all possible finite three-dimensional shapes.

 

Thurston, a Fields medalist who spent much of his career at Princeton and Cornell, had an uncanny ability to imagine the unimaginable: not just the shapes that live inside our ordinary three-dimensional space, but also the far vaster menagerie of shapes that involve such complicated twists and turns that they can only fit into higher-dimensional spaces. Where other mathematicians saw inchoate masses, Thurston saw structure: symmetries, surfaces, relationships between different shapes.

 

“Many people have an impression, based on years of schooling, that mathematics is an austere and formal subject concerned with complicated and ultimately confusing rules,” he wrote in 2009. “Good mathematics is quite opposite to this. Mathematics is an art of human understanding. … Mathematics sings when we feel it in our whole brain.”

 

At the core of Thurston’s vision was a marriage between two seemingly disparate ways of studying three-dimensional shapes: geometry, the familiar realm of angles, lengths, areas and volumes, and topology, which studies all the properties of a shape that don’t depend on precise geometric measurements — the properties that remain unchanged if the shape gets stretched and distorted like Silly Putty.

 

To a topologist, the surface of a frying pan is equivalent to that of a table, a pencil or a soccer ball; the surface of a coffee mug is equivalent to a doughnut surface, or torus. From a topologist’s point of view, the multiplicity of two-dimensional shapes — that is, surfaces — essentially boils down to a simple list of categories: sphere-like surfaces, toroidal surfaces, and surfaces like the torus but with more than one hole. (Most of us think of spheres and tori as three-dimensional, but because mathematicians think of them as hollow surfaces, they consider them two-dimensional objects, measured in terms of surface area, not volume.)

 

Thurston’s key insight was that it is in the union of geometry and topology that three-dimensional shapes, or “three-manifolds,” can be understood. Just as the topological category of “two-manifolds” containing the surfaces of a frying pan and a pencil also contains a perfect sphere, Thurston conjectured that many categories of three-manifolds contain one exemplar, a three-manifold whose geometry is so perfect, so uniform, so beautiful that, as Walter Neumann of Columbia University is fond of saying, it “rings like a bell.” What’s more, Thurston conjectured, shapes that don’t have such an exemplar can be carved up into chunks that do.

 

In a 1982 paper, Thurston set forth this “geometrization conjecture” as part of a group of 23 questions about three-manifolds that offered mathematicians a road map toward a thorough understanding of three-dimensional shapes. (His list had 24 questions, but one of them, still unresolved, is more of an intriguing side alley than a main thoroughfare.)

 

“Thurston had this enormous talent for asking the right questions,” said Vladimir Markovic, a mathematician at the California Institute of Technology. “Anyone can ask questions, but it’s rare for a question to lead to insight and beauty, the way Thurston’s questions always seemed to do.”

 

These questions inspired a new generation of mathematicians, dozens of whom chose to pursue their graduate studies under Thurston’s guidance. Thurston’s mathematical “children” manifest his style, wrote Richard Brown of Johns Hopkins University. “They seem to see mathematics the way a child views a carnival: full of wonder and joy, fascinated with each new discovery, and simply happy to be a part of the whole scene.”

 

In the decades after Thurston’s seminal paper appeared, mathematicians followed his road map, motivated less by possible applications than by a realization that three-manifolds occupy a sweet spot in the study of shapes. Two-dimensional shapes are a bit humdrum, easy to visualize and categorize. Four-, five- and higher-dimensional shapes are essentially untamable: the range of possibilities is so enormous that mathematicians have limited their ambitions to understanding specialized subclasses of them. For three-dimensional shapes, by contrast, the structures are mysterious and mind-boggling, but ultimately knowable.

 

As Thurston’s article approached its 30th anniversary this year, all but four of the 23 main questions had been settled, including the geometrization conjecture, which the Russian mathematician Grigori Perelman proved in 2002 in one of the signal achievements of modern mathematics. The four unsolved problems, however, stubbornly resisted proof.

 

“The fact that we couldn’t solve them for so long meant that something deep was going on,” said Yair Minsky, of Yale University.

 

Finally, in March, Ian Agol, of the University of California at Berkeley, electrified the mathematics community by announcing a proof of “Wise’s conjecture,” which settled the last four of Thurston’s questions in one stroke.

 

Mathematicians are calling the result the end of an era.

 

“The vision of three-manifolds that Thurston articulated in his paper, which must have looked quite fantastic at the time, has now been completely realized,” said Danny Calegari, of the California Institute of Technology. “His vision has been remarkably vindicated in every way: every detail has turned out to be correct.”

“I used to feel that there was certain knowledge and certain ways of thinking that were unique to me,” Thurston wrote when he won a Steele mathematics prize this year, just months before he died in August at 65. “It is very satisfying to have arrived at a stage where this is no longer true — lots of people have picked up on my ways of thought, and many people have proven theorems that I once tried and failed to prove.”

 

Agol’s result means that there is a simple recipe for constructing all compact, hyperbolic three-manifolds — the one type of three-dimensional shape that had not yet been fully explicated.

 

“In a precise sense, we now understand what all three-manifolds look like,” said Henry Wilton, of University College London. “This is the culmination of a massive success story in mathematics.”

 

read more at...

 

https://www.quantama...lexes-and-back/

 

 



#36
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts

The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe

Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics.

In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players.

 

But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. Mathematicians are flummoxed. Despite the huge success of deep neural networks, nobody is quite sure how they achieve their success.

 

Today that changes thanks to the work of Henry Lin at Harvard University and Max Tegmark at MIT. These guys say the reason why mathematicians have been so embarrassed is that the answer depends on the nature of the universe. In other words, the answer lies in the regime of physics rather than mathematics.

 

First, let’s set up the problem using the example of classifying a megabit grayscale image to determine whether it shows a cat or a dog.

 

Such an image consists of a million pixels that can each take one of 256 grayscale values. So in theory, there can be 256^1,000,000 possible images, and for each one it is necessary to compute whether it shows a cat or dog. And yet neural networks, with merely thousands or millions of parameters, somehow manage this classification task with ease.
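
As a back-of-the-envelope check on the size of that number (the arithmetic below is not from the article, just the count it implies):

import math

n_pixels, n_levels = 10**6, 256
# Number of distinct images = 256 ** 1_000_000; far too large to write out,
# so count its decimal digits instead.
digits = math.floor(n_pixels * math.log10(n_levels)) + 1
print(digits)   # 2,408,240 digits; for comparison, the number of atoms in
                # the observable universe (~10^80) has only 81 digits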

 

In the language of mathematics, neural networks work by approximating complex mathematical functions with simpler ones. When it comes to classifying images of cats and dogs, the neural network must implement a function that takes as an input a million grayscale pixels and outputs the probability distribution of what it might represent.
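
In code, "a function that takes a million grayscale pixels and outputs a probability distribution" can be as small as the toy network below. The weights are random and untrained, and nothing here is taken from Lin and Tegmark's paper; the point is only to show the shape of the mapping and that roughly 16 million parameters already suffice to define such a function.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()              # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy two-layer network: 1,000,000 pixel inputs -> 16 hidden units -> 2 classes.
# About 16 million weights in total, random and untrained here.
W1 = rng.normal(scale=1e-3, size=(16, 1_000_000))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(2, 16))
b2 = np.zeros(2)

def classify(pixels):
    """Map a flattened grayscale image (values 0-255) to [p_cat, p_dog]."""
    h = np.tanh(W1 @ (pixels / 255.0) + b1)
    return softmax(W2 @ h + b2)

image = rng.integers(0, 256, size=1_000_000)   # a random stand-in "image"
print(classify(image))                         # some probability pair, e.g. [0.46 0.54]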

 


 

The problem is that there are orders of magnitude more mathematical functions than possible networks to approximate them. And yet deep neural networks somehow get the right answer.

 

Now Lin and Tegmark say they’ve worked out why. The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties.

 

continued at...

 

https://www.technolo...f-the-universe/


  • Whereas likes this

#37
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
How the Mathematics of Algebraic Topology Is Revolutionizing Brain Science

 

Nobody understands the brain’s wiring diagram, but the tools of algebraic topology are beginning to tease it apart.

The human connectome is the network of links between different parts of the brain. These links are mapped out by the brain’s white matter—bundles of nerve cell projections called axons that connect the nerve cell bodies that make up gray matter.

 

The conventional view of the brain is that the gray matter is primarily involved in information processing and cognition, while white matter transmits information between different parts of the brain. The structure of white matter—the connectome—is essentially the brain’s wiring diagram.

 

This structure is poorly understood, but there are several high-profile projects to study it. This work shows that the connectome is much more complex than originally thought. The human brain contains some 10^10 neurons linked by 10^14 synaptic connections. Mapping the way these link together is a tricky business, not least because the structure of the network depends on the resolution at which it is examined.

 

This work is also uncovering evidence that the white matter plays a much more significant role than first thought in learning and coordinating the brain’s activity.  But exactly how this role is linked to the structure is not known.

 


 

So understanding this structure over vastly different scales is one of the great challenges of neuroscience, but one that is hindered by a lack of appropriate mathematical tools.

 

Today, that looks set to change thanks to the mathematical field of algebraic topology, which neurologists are gradually coming to grips with for the first time. This discipline has traditionally been an arcane pursuit for classifying spaces and shapes.  Now Ann Sizemore at the University of Pennsylvania and a few pals show how it is beginning to revolutionize our understanding of the connectome.
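
The excerpt stops before the technical details, but one common first step in this kind of analysis is to treat every group of mutually connected brain regions (a clique) as a simplex, so that the whole connectome becomes a simplicial complex. The toy example below shows just that first step using networkx; the five-region graph is invented for illustration and is not real connectome data.

import networkx as nx

# Toy "connectome": nodes are brain regions, edges are white-matter connections.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),   # A, B, C all mutually connected
    ("C", "D"), ("D", "E"), ("C", "E"),   # C, D, E all mutually connected
    ("B", "D"),
])

# Maximal cliques are the all-to-all connected groups; in topological analyses
# a k-clique is treated as a (k-1)-dimensional simplex of the complex.
for clique in nx.find_cliques(G):
    print(sorted(clique))
# Prints the three maximal cliques: ['A', 'B', 'C'], ['B', 'C', 'D'], ['C', 'D', 'E']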

 

continues at...

 

https://www.technolo...-brain-science/


  • Whereas likes this

#38
MarcZ

MarcZ

    Chief Flying Car Critic

  • Members
  • PipPipPipPipPipPipPipPipPip
  • 3,241 posts
  • LocationCanada

Progress made towards the solution of another Millennium Problem - The Riemann Hypothesis.

 

http://www.scienceal...r-maths-problem


  • Unity likes this

#39
Jakob

Jakob

    Fenny-Eyed Slubber-Yuck

  • Members
  • PipPipPipPipPipPipPipPipPipPip
  • 5,166 posts
  • LocationIn the Basket of Deplorables

First time posting here...

 

Optimal Universal origami folding with more practical results

 

At the Symposium on Computational Geometry in July, Erik Demaine of MIT and Tomohiro Tachi of the University of Tokyo will announce the completion of a quest that began with a 1999 paper: a universal algorithm for folding origami shapes that guarantees a minimum number of seams.


  • Unity likes this

Click 'show' to see quotes from great luminaries.

Spoiler

#40
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,437 posts
http://www.wbur.org/...-gerrymandering


Interesting article on mathematicians fighting back against gerrymandering



