
Math News and Discussion


45 replies to this topic

#1
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts


 

In a series of papers posted online in recent weeks, mathematicians have solved a problem about the pattern-matching card game Set that predates the game itself. The solution, whose simplicity has stunned mathematicians, is already leading to advances in other combinatorics problems.

 
Invented in 1974, Set has a simple goal: to find special triples called “sets” within a deck of 81 cards. Each card displays a different design with four attributes — color (which can be red, purple or green), shape (oval, diamond or squiggle), shading (solid, striped or outlined) and number (one, two or three copies of the shape). In typical play, 12 cards are placed face-up and the players search for a set: three cards whose designs, for each attribute, are either all the same or all different.
 
Occasionally, there’s no set to be found among the 12 cards, so the players add three more cards. Even less frequently, there’s still no set to be found among the 15 cards. How big, one might wonder, is the largest collection of cards that contains no set?
 
The answer is 20 — proved in 1971 by the Italian mathematician Giuseppe Pellegrino. But for mathematicians, this answer was just the beginning. After all, there’s nothing special about having designs with only four attributes — that choice simply creates a manageable deck size. It’s easy to imagine cards with more attributes (for instance, they could have additional images, or even play different sounds or have scratch-and-sniff smells). For every whole number n, there’s a version of Set with n attributes and 3ⁿ different cards.
 
For each such version, we can consider collections of cards that contain no set — what mathematicians confusingly call “cap sets” — and ask how large they can be. Mathematicians have calculated the maximal size of cap sets for games with up to six attributes, but we’ll probably never know the exact size of the largest cap set for a game with 100 or 200 attributes, said Jordan Ellenberg, a mathematician at the University of Wisconsin, Madison — there are so many different collections of cards to consider that the computations are too mammoth ever to be carried out.
 
Yet mathematicians can still try to figure out an upper bound on how big a cap set can be — a number of cards guaranteed to hold at least one set. This question is one of the simplest problems in the mathematical field called Ramsey theory, which studies how large a collection of objects can grow before patterns emerge.
 
“The cap set problem we think of as a model problem for all these other questions in Ramsey theory,” said Terence Tao, a mathematician at the University of California, Los Angeles, and a winner of the Fields Medal, one of mathematics’ highest honors. “It was always believed that progress would come there first, and then once we’d sorted that out we would be able to make progress elsewhere.”
 
Yet until now, this progress has been slow. Mathematicians established in papers published in 1995 and 2012 that cap sets must be smaller than about 1/n the size of the full deck. Many mathematicians wondered, however, whether the true bound on cap set size might be much smaller than that.
 
They were right to wonder. The new papers posted online this month showed that relative to the size of the deck, cap set size shrinks exponentially as n gets larger. In a game with 200 attributes, for instance, the previous best result limited cap set size to at most about 0.5 percent of the deck; the new bound shows that cap sets are smaller than 0.0000043 percent of the deck.
 
The previous results were “already considered to be quite a big breakthrough, but this completely smashes the bounds that they achieved,” said Timothy Gowers, a mathematician and Fields medalist at the University of Cambridge.
 
There’s still room to improve the bound on cap sets, but in the near term, at least, any further progress is likely to be incremental, Gowers said. “In a certain sense this completely finishes the problem.”
 
Game, Set, Match
 
To find an upper bound on the size of cap sets, mathematicians translate the game into geometry. For the traditional Set game, each card can be encoded as a point with four coordinates, where each coordinate can take one of three values (traditionally written as 0, 1 and 2). For instance, the card with two striped red ovals might correspond to the point (0, 2, 1, 0), where the 0 in the first spot tells us that the design is red, the 2 in the second spot tells us that the shape is oval, and so on.  There are similar encodings for versions of Set with n attributes, in which the points have n coordinates instead of four.
 
The rules of the Set game translate neatly into the geometry of the resulting n-dimensional space: Every line in the space contains exactly three points, and three points form a set precisely when they lie on the same line. A cap set, therefore, is a collection of points that contains no complete lines.
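The translation is easy to play with in code. Here is a minimal sketch (mine, not from the article) of the encoding in Python: cards are tuples in {0,1,2}ⁿ, and three cards form a set exactly when every coordinate sums to 0 mod 3, which is the arithmetic version of "all the same or all different".

from itertools import product

def is_set(p, q, r):
    # All same (0+0+0, 1+1+1, 2+2+2) or all different (0+1+2) sums to 0 mod 3;
    # two-same-one-different never does.
    return all((a + b + c) % 3 == 0 for a, b, c in zip(p, q, r))

# n = 4 attributes: the full deck has 3**4 = 81 cards.
deck = list(product(range(3), repeat=4))
print(len(deck))                                 # 81
print(is_set((0,0,0,0), (1,1,1,1), (2,2,2,2)))   # True: all different everywhere
print(is_set((0,0,0,0), (1,1,1,1), (2,2,2,1)))   # False: last coordinate is 0,1,1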
 
SETPoint_1000.png
 
Previous approaches to getting an upper bound on cap set size used a technique called Fourier analysis, which views the collection of points in a cap set as a combination of waves and looks for the directions in which the collection oscillates. “The conventional wisdom was that this was the way to go,” Tao said.
 
Now, however, mathematicians have solved the cap set problem using an entirely different method — and in only a few pages of fairly elementary mathematics. “One of the delightful aspects of the whole story to me is that I could just sit down, and in half an hour I had understood the proof,” Gowers said.
 
The proof uses the “polynomial method,” an innovation that, despite its simplicity, only rose to prominence on the mathematical scene about a decade ago. The approach produces “beautiful short proofs,” Tao said. It’s “sort of magical.”
 
A polynomial is a mathematical expression built out of numbers and variables raised to powers — for instance, x² + y² or 3xyz³ + 2. Given any collection of numbers, it’s possible to create a polynomial that evaluates to zero at all those numbers — for example, if you pick the numbers 2 and 3, you can build the expression (x – 2)(x – 3); this multiplies out to the polynomial x² – 5x + 6, which equals zero if x = 2 or x = 3. Something similar can be done to create polynomials that evaluate to zero at a collection of points — for instance, the points corresponding to Set cards.
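As a quick illustration (my example, using the sympy library, not anything from the papers), the construction in the paragraph above takes one line:

from sympy import symbols, expand

x = symbols('x')
p = expand((x - 2) * (x - 3))
print(p)                            # x**2 - 5*x + 6
print(p.subs(x, 2), p.subs(x, 3))   # 0 0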
 
At first glance, this doesn’t seem like a very deep fact. Yet somehow, these polynomials often seem to contain information that isn’t readily visible from the set of points. Mathematicians don’t fully understand, Ellenberg said, just why this approach works so well, and which types of problems it can be useful for. Until a few weeks ago, he added, he considered cap set “an example of a problem where the polynomial method really has no purchase.”
 
That changed on May 5, when three mathematicians — Ernie Croot of the Georgia Institute of Technology, Vsevolod Lev of the University of Haifa, Oranim, in Israel, and Péter Pál Pach of the Budapest University of Technology and Economics in Hungary — posted a paper online showing how to use the polynomial method to solve a closely related problem, in which each Set attribute can have four different options instead of three. For technical reasons, this problem is more tractable than the original Set problem.
 
In this game variant, for any collection of cards with no set, Croot, Lev and Pach considered which additional cards could be laid down on the table to complete a set. They then built a polynomial that evaluates to zero on these completion cards, and figured out an ingeniously simple way to split the polynomial into pieces with smaller exponents, which led to a bound on the size of collections with no sets. It was a “very inventive move,” Ellenberg said. “It’s always incredibly cool when there’s something truly new and it’s easy.”
 
The paper soon set off a cascade of what Ellenberg called “math at Internet speed.” Within 10 days, Ellenberg and Dion Gijswijt, a mathematician at Delft University of Technology in the Netherlands, had each independently posted papers showing how to modify the argument to polish off the original cap set problem in just three pages. Yesterday, they posted a joint paper combining their results. The trick, Ellenberg said, is to realize that there are many different polynomials that evaluate to zero on a given set of points, and that choosing just the right one gets “a little bit more juice out of the method.” A cap set, the new proofs establish, can be at most (2.756/3)ⁿ as large as the whole deck.
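The percentages quoted earlier are easy to sanity-check (my arithmetic, not the papers'): the old bound is roughly 1/n of the deck, the new one (2.756/3)ⁿ of the deck.

n = 200
old_bound = 1 / n                 # previous best: about 1/n of the deck
new_bound = (2.756 / 3) ** n      # the new exponential bound
print(f"{old_bound:.4%}")         # 0.5000% of the deck
print(f"{new_bound:.7%}")         # 0.0000043% of the deck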
 
Mathematicians are now scrambling to figure out the implications of the new proof. Already, a paper has been posted online showing that the proof rules out one of the approaches mathematicians were using to try to create more efficient matrix multiplication algorithms. And on May 17, Gil Kalai, of the Hebrew University of Jerusalem, wrote an “emergency” blog post pointing out that the cap set result can be used to prove the “Erdős-Szemerédi sunflower conjecture,” which concerns sets that overlap in a sunflower pattern.
 
“I think a lot of people will be thinking, ‘What can I do with this?’” Gowers said. Croot, Lev and Pach’s approach, he wrote in a blog post, is “a major new technique to add to the toolbox.”
 
The fact that the cap set problem finally yielded to such a simple technique is humbling, Ellenberg said. “It makes you wonder what else is actually easy.”
 

 


  • Outlook likes this

#2
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts

zero_set_of_discriminant_of_restricted_q

This image by Greg Egan shows the set of points (a,b,c) for which the quintic x⁵ + ax⁴ + bx² + c has repeated roots. The plane c = 0 has been removed.
 
The fascinating thing about this surface is that it appears to be diffeomorphic to two other surfaces, defined in completely different ways, which we discussed here:
 
• Involutes of a cubical parabola.
 
• Discriminant of the icosahedral group.
 
The icosahedral group is also called H₃. In his book The Theory of Singularities and its Applications, V. I. Arnol’d writes:
 
The discriminant of the group H₃ is shown in Fig. 18. Its singularities were studied by O. V. Lyashko (1982) with the help of a computer. This surface has two smooth cusped edges, one of order 3/2 and the other of order 5/2. Both are cubically tangent at the origin. Lyashko has also proved that this surface is diffeomorphic to the set of polynomials x⁵ + ax⁴ + bx² + c having a multiple root.
 
Figure 18 of Arnol’d’s book is a hand-drawn version of the surface below:
 
H3_discriminant.png
 
Arnol’d’s claim that the discriminant of H₃ is diffeomorphic to the set of polynomials x⁵ + ax⁴ + bx² + c having a repeated root is not literally true, since all such polynomials with c = 0 have a repeated root, and we need to remove this plane to obtain a surface that looks like the discriminant of H₃. After this correction his claim seems right, but it still deserves proof.
 
Puzzle. Can you prove the corrected version of Arnol’d’s claim?
 
References
Arnol’d’s claim appears on page 29 here:
 
• Vladimir I. Arnol’d, The Theory of Singularities and its Applications, Cambridge U. Press, Cambridge, 1991.
 
The following papers are also indirectly relevant:
 
• Vladimir I. Arnol’d, Singularities of systems of rays, Uspekhi Mat. Nauk 38:2 (1983), 77–147. English translation in Russian Math. Surveys 38:2 (1983), 87–176.
 
• O. V. Lyashko, Classification of critical points of functions on a manifold with singular boundary, Funktsional. Anal. i Prilozhen. 17:3 (1983), 28–36. English translation in Functional Analysis and its Applications 17:3 (1983), 187–193.
 
• O. P. Shcherbak, Singularities of a family of evolvents in the neighbourhood of a point of inflection of a curve, and the group H3H3 generated by reflections, Funktsional. Anal. i Prilozhen. 17:4 (1983), 70–72. English translation in Functional Analysis and its Applications 17:4 (1983), 301–303.
 
• O. P. Shcherbak, Wavefronts and reflection groups, Uspekhi Mat. Nauk 43:3 (1988), 125–160. English translation in Russian Mathematical Surveys 43:3 (1988), 149–194.
 
All these sources discuss the discoveries of Arnol’d and his colleagues relating singularities and Coxeter–Dynkin diagrams, starting with the more familiar ADE cases, then moving on to the non-simply-laced cases, and finally the non-crystallographic cases related to H₂ (the symmetry group of the pentagon), H₃ (the symmetry group of the icosahedron) and H₄ (the symmetry group of the 600-cell).
 
Visual Insight is a place to share striking images that help explain advanced topics in mathematics. I’m always looking for truly beautiful images, so if you know about one, please drop a comment here and let me know!
 


#3
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts
Two-hundred-terabyte maths proof is largest ever
 
A computer cracks the Boolean Pythagorean triples problem — but is it really maths?
 
Evelyn Lamb
 
26 May 2016
 
_MG_7020sdp_web.png
 
The University of Texas’s Stampede supercomputer, on which the 200-terabyte maths proof was computed.
 
Three computer scientists have announced the largest-ever mathematics proof: a file that comes in at a whopping 200 terabytes [1], roughly equivalent to all the digitized text held by the US Library of Congress. The researchers have created a 68-gigabyte compressed version of their solution — which would allow anyone with about 30,000 hours of spare processor time to download, reconstruct and verify it — but a human could never hope to read through it.
 
Computer-assisted proofs too large to be directly verifiable by humans have become commonplace, and mathematicians are familiar with computers that solve problems in combinatorics — the study of finite discrete structures — by checking through umpteen individual cases. Still, “200 terabytes is unbelievable”, says Ronald Graham, a mathematician at the University of California, San Diego. The previous record-holder is thought to be a 13-gigabyte proof [2], published in 2014.
 
The puzzle that required the 200-terabyte proof, called the Boolean Pythagorean triples problem, has eluded mathematicians for decades. In the 1980s, Graham offered a prize of US$100 for anyone who could solve it. (He duly presented the cheque to one of the three computer scientists, Marijn Heule of the University of Texas at Austin, earlier this month.) The problem asks whether it is possible to colour each positive integer either red or blue, so that no trio of integers a, b and c that satisfy Pythagoras’ famous equation a² + b² = c² are all the same colour. For example, for the Pythagorean triple 3, 4 and 5, if 3 and 5 were coloured blue, 4 would have to be red.
 
In a paper posted on the arXiv server on 3 May, Heule, Oliver Kullmann of Swansea University, UK, and Victor Marek of the University of Kentucky in Lexington have now shown that there are many allowable ways to colour the integers up to 7,824 — but when you reach 7,825, it is impossible for every Pythagorean triple to be multicoloured [1]. There are more than 10^2,300 ways to colour the integers up to 7,825, but the researchers took advantage of symmetries and several techniques from number theory to reduce the total number of possibilities that the computer had to check to just under 1 trillion. It took the team about 2 days running 800 processors in parallel on the University of Texas’s Stampede supercomputer to zip through all the possibilities. The researchers then verified the proof using another computer program.
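The scale of the search is the striking part. As a toy contrast (my sketch; the real proof relied on SAT-solving machinery and 800 cores, not brute force), a simple backtracking colourer handles small ranges easily:

def pythagorean_triples(n):
    # All triples (a, b, c) with a*a + b*b == c*c and c <= n.
    squares = {k * k: k for k in range(1, n + 1)}
    return [(a, b, squares[a*a + b*b])
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            if a*a + b*b in squares]

def colourable(n):
    # Try to 2-colour 1..n with no monochromatic Pythagorean triple,
    # assigning colours in increasing order and backtracking on conflicts.
    by_max = {}
    for t in pythagorean_triples(n):
        by_max.setdefault(max(t), []).append(t)
    colour = [None] * (n + 1)

    def extend(k):
        if k > n:
            return True
        for c in (0, 1):
            colour[k] = c
            if all(len({colour[a], colour[b], colour[x]}) > 1
                   for (a, b, x) in by_max.get(k, [])) and extend(k + 1):
                return True
        colour[k] = None
        return False

    return extend(1)

print(colourable(60))   # True -- small ranges are easy; 7,825 is where it fails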

 

solution-7824 copy.jpg

 

The numbers 1 to 7,824 can be coloured either red or blue so that no trio a, b and c that satisfies a² + b² = c² is all the same colour. The grid of 7,824 squares here shows one such solution, with numbers coloured red or blue (a white square can be either). But for the numbers 1 to 7,825, there is no solution.
Facts vs theory
The Pythagorean triples problem is one of many similar questions in Ramsey theory, an area of mathematics that is concerned with finding structures that must appear in sufficiently large sets. For example, the researchers think that if the problem had allowed three colours, rather than two, they would still hit a point where it would be impossible to avoid creating a Pythagorean triple where a, b and c were all the same colour; indeed, they conjecture that this is the case for any finite choice of colours. Any proof for more colours will probably be much larger even than the 200-terabyte 2-colour proof, unless researchers can simplify the case-by-case checking process with a breakthrough in understanding.
 
Although the computer solution has cracked the Boolean Pythagorean triples problem, it hasn’t provided an underlying reason why the colouring is impossible, or explored whether the number 7,825 is meaningful, says Kullmann. That echoes a common philosophical objection to the value of computer-assisted proofs: they may be correct, but are they really mathematics? If mathematicians’ work is understood to be a quest to increase human understanding of mathematics, rather than to accumulate an ever-larger collection of facts, a solution that rests on theory seems superior to a computer ticking off possibilities.
 
That did ultimately occur in the case of the 13-gigabyte proof from 2014, which solved a special case of a question called the Erdős discrepancy problem. A year later, mathematician Terence Tao of the University of California, Los Angeles, solved the general problem the old-fashioned way [3] — a much more satisfying resolution.
 
Nature 534, 17–18 (02 June 2016) doi:10.1038/nature.2016.19990
 
References
 
1. Heule, M. J. H., Kullmann, O. & Marek, V. W. Preprint at http://arxiv.org/abs/1605.00723 (2016).

2. Konev, B. & Lisitsa, A. Preprint at http://arxiv.org/abs/1402.2184 (2014).

3. Tao, T. Preprint at http://arxiv.org/abs/1509.05363 (2015).
 


#4
ddmkm122

ddmkm122

    Member

  • Members
  • 336 posts
  • LocationRochester, New York, The United States Of America.

 

Two-hundred-terabyte math proof is largest ever.

Cool!

 

I would like to request that this be pinned!

This.


  • Unity likes this

#5
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts

56f956b675a18.jpg

 

(Phys.org)—As Yves Meyer was getting ready to publish a detailed mathematical proof that he had spent months working on, he decided to do a final search of the existing literature. In the reference list of one of the papers he had just peer-reviewed, he noticed what he describes as a "bizarre" paper published in 1959 by Andrew Paul Guinand. Upon further investigation, he was shocked to discover that Guinand had formulated the exact same proof to solve the same problem that Meyer had been working on, though the solution had remained deeply buried and completely forgotten.
 
Meyer, a Professor Emeritus at the École Normale Supérieure de Cachan, accordingly revised and published his paper, which appeared just a few weeks ago in the Proceedings of the National Academy of Sciences. In his work, he proves that there is not just one, but many Poisson summation formulas, using a simpler solution than was previously known.
 
Meyer—who has spent his career making fundamental contributions to wavelet theory and number theory, and recently won the Gauss Prize—explains that at first he was somewhat embarrassed that someone else had made the same discovery many decades earlier. However, he also interprets the experience as an example of a more universal pattern: that all of human discovery builds on what comes before.
"Suddenly I understood what I have been steadily doing in my scientific life," Meyer told Phys.org. "I was transmitting a heritage. Today I can express my gratitude to Guinand, who was a great person, both as a human being and as a mathematician."
 
Not just one Poisson formula
 
Meyer has spent much of his career investigating the mathematical properties of oscillations. One question that arises is, what happens when large numbers of sines or cosines interfere with one another? This interference occurs in many physical scenarios, such as X-ray crystallography, a technique used to study crystals. The Poisson formula is the tool that researchers use to understand this interference.
 
The original Poisson formula, which was discovered in the early 1800s by Siméon Denis Poisson, equates two sums. The first sum involves an infinite number of trigonometric functions being added together, and the second one an infinite number of "spikes" being added. For a simple example, the first sum is ½ + cos(x) + cos(2x) + cos(3x) + cos(4x) +…. This sum can add to either zero or infinity, and which answer you get depends on the value of the variable x. In this example, when x is any number that is not a multiple of 2π, then the sum is zero, corresponding to destructive interference. Meyer explains that this situation can be thought of as all of the numbers cos(x), cos(2x), cos(3x),… being evenly situated around 0, so that even though there is an infinite number of numbers, when added, they all cancel each other out.
 
On the other hand, when the variable x is some multiple of 2π, then 1 = cos(x) = cos(2x) = cos(3x)…, so their sum spikes to infinity, corresponding to constructive interference. This sum of spikes is the second sum.
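Both cases are easy to see numerically. Here is a small check (my sketch, plain Python) of the partial sums 1/2 + cos(x) + ... + cos(Nx):

import math

def partial_sum(x, N):
    return 0.5 + sum(math.cos(k * x) for k in range(1, N + 1))

for x in (1.0, 2.5, 2 * math.pi):
    print(x, [round(partial_sum(x, N), 2) for N in (10, 100, 1000)])

# For x = 1.0 or 2.5 the partial sums stay bounded and oscillate around 0
# (destructive interference); for x = 2*pi every cosine equals 1, so the
# sum grows like N + 0.5 (the "spike" of constructive interference).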
The big question that Meyer was investigating was whether other Poisson formulas might exist with completely distinct arithmetic properties. For a long time, mathematicians believed that the original formula was the only Poisson formula.
 
But then in late 2015, Meyer was reviewing a new paper by Nir Lev and Alexander Olevskii that proved that other Poisson formulas do indeed exist. This paper inspired Meyer's new proof, and was also the paper that referenced Guinand's 1959 paper—though of course Lev and Olevskii had not noticed the more elegant proof buried within this paper.
 
"I was enthused by Lev and Olevskii's discovery, and I lectured on it with Olevskii's permission," Meyer said. "Then a much simpler solution unconsciously came to my mind. I wrote a detailed proof. Before publishing it, I wanted to check the existing literature. In the reference list of the Lev and Olevskii article, a bizarre paper by A.P. Guinand attracted my attention. This odd paper was published in 1959. I went to a mathematical library and I was overwhelmed by surprise and shame. Indeed, Guinand used my solution to solve the problem raised by Lev and Olevskii. You had to read between the lines in Guinand's article to check what I am saying."
Throughout his paper, Meyer gives credit to Guinand and even names one of Guinand's ideas after the late mathematician, calling it "Guinand's distribution."
 
A story that continues to be written
 
Although in one sense it is surprising that one of the same problems in modern mathematical research was being worked on more than half a century ago, in another sense that is just the nature of the field. Meyer explains that he collaborated with and admired many highly regarded mathematicians throughout his career (Alberto Calderón, Alexander Grossmann, and Jean Morlet, among others), and that their ideas are continually being reanalyzed, reshaped, and sometimes even applied to new innovations.
 
One prominent example of a new application can be found in the recent detection of gravitational waves. The algorithm used in the discovery contains something called the "Meyer wavelet," which Meyer proposed 25 years ago, and work by other mathematicians has also played an important role. Overall, wavelet theory (which Meyer helped develop over the past several decades) has applications in analyzing data from audio signals, images, electronic signals, and in general to extract information from many different kinds of signals.
 
All of these recent developments have their roots in past research—often, Meyer explains, from the distant past.
"And the story does not end there," Meyer said. "Today I am continuing my research on Olevskii's problem. I discovered an ancient paper by my former advisor Jean-Pierre Kahane. It is a joint work with Sjolem Mandelbrojt. Then I went further into the past and could trace everything back to the Epstein zeta function, which was discovered and studied in 1903. But this is another story."
 
 
More information: Yves F. Meyer. "Measures with locally finite support and spectrum." PNAS. DOI: 10.1073/pnas.1600685113 
 
Journal reference: Proceedings of the National Academy of Sciences
 
 


#6
Infinite

Infinite

    Member

  • Members
  • 649 posts
  • LocationDublin, Ireland
Super interesting! But my brain has fried.

Also, please pin this thread. :)
  • Unity likes this

Is minic an fhírinne searbh. (The truth is often bitter.)


#7
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts

The central idea in statistics is that you can say something about a whole population by looking at a smaller sample. Without this idea there wouldn't be opinion polls or election forecasts, there would be no way of testing new medical drugs, or the safety of bridges, etc, etc. It's the central limit theorem that is to a large extent responsible for the fact that we can do all these things and get a grip on the uncertainties involved.

 

crowd.jpg

 

What's the average weight in the population?

 

To see how the theorem works, imagine that you want to know the average weight of the population in the UK. You go out and measure the weight of, say, 100 people whom you've randomly chosen and work out the average for this group — call this the sample average. Now the sample average is supposed to give you a good idea of the nation's average. But what if you happened to pick only big people for your sample, or only very skinny ones?

 

To get an idea of how representative your average is likely to be, you need to know something about how the average weight of 100-people-samples varies over the population: if you took lots and lots of samples of size 100 and worked out the average weight for each, then how variable would this set of numbers be? And what would its average (the average of averages) be compared to the true average weight in the population?

 

For example, suppose you know that if you took lots and lots of 100-people-samples and wrote down the average weight of each sample, you'd get all values from 10kg to 300kg in equal proportion. Then this would tell you that your method of estimating the overall average by taking one sample of 100 people isn't a very good one, because there's too much variability — you're just as likely to get any of the possible values, and you don't know which one is closest to the true average weight in the population.

 

normal.jpg

 

Various examples of the normal distribution, with different means and variances.

 

So how can we say anything about the distribution of 100-people-averages — called the sampling distribution — when we don't know anything about the distribution of weight across the population? This is where the central limit theorem comes in: it says that for a big enough sample your sampling distribution is approximated by a normal distribution — this is the distribution with the famous bell shape. (A common convention is that a sample size of 30 is good enough.)

 

The mean of this normal distribution (the average of averages corresponding to the tip of the bell) is the same as the mean in the population (the average weight of the population). The variance of this normal distribution, that is how much it varies about the mean (indicated by the width of the bell), depends on the sample size: the larger the sample, the smaller the variance. The exact relationship: the variance of the sampling distribution is the population variance divided by the sample size, so the standard deviation of the sample average shrinks like 1 over the square root of the sample size.

So if your sample size is big enough (100 would certainly do since it's bigger than 30), then the relatively small variance of the normal sampling distribution means that the average weight you observe is close to the mean of that normal distribution (since the bell is quite narrow). And since the mean of that normal distribution is equal to the true average weight across the population, your observed average is a good approximation of the true average.
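None of this requires the population itself to be normal, and a short simulation makes that concrete. A sketch (mine, using numpy; the lognormal "weights" are an arbitrary stand-in for a skewed population):

import numpy as np

rng = np.random.default_rng(0)

# A skewed, decidedly non-normal population of "weights".
population = rng.lognormal(mean=4.0, sigma=0.4, size=1_000_000)

# 10,000 samples of size 100; one average per sample.
samples = rng.choice(population, size=(10_000, 100))
averages = samples.mean(axis=1)

print(population.mean(), averages.mean())      # nearly identical
print(population.std() / 10, averages.std())   # std / sqrt(100) matches

A histogram of the averages comes out bell-shaped even though the population is skewed, which is exactly the central limit theorem at work.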

 

You can make all this precise, for example you can say exactly how confident you are that the true average is within a certain distance of your sample average, and you can also use the result to calculate how large a sample you need to get an estimate of a given accuracy. It's the central limit theorem that lends precision to the art of statistical inference, and it's also behind the fact that the normal distribution is so ubiquitous.

 

The central limit theorem is actually a bit more general than we've let on here. For a precise statement, see here.

 

https://plus.maths.o...l-limit-theorem



#8
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts

Analysing Ada by Marianne Freiberger

 

Submitted by mf344 on December 17, 2015
 

It's a curious fact that female pioneers in maths and science require extra scrutiny. Because we need them as role models, we need to be doubly sure they weren't flukes or imposters. Their work, even their characters, can be exposed to analyses much more minute than many men have had to endure. It's ironic, really.

 

ada_lovelace_portrait-320x459.jpg

 

A famous watercolour portrait of Ada Lovelace.

 

Ada Lovelace is a prime example of this. Born 200 years ago, in 1815, she is hailed as the world's first computer programmer; a visionary who anticipated modern computers long before they were invented. Since 2009 Ada Lovelace Day has been harnessing her name for an annual celebration of women in science, technology, engineering and maths. This year has seen the publication of a brilliant graphic novel by Sydney Padua, celebrating Lovelace and her mathematical partner, Charles Babbage, and drawing a lot on clashes of stereotypes.

 

On the other hand, Lovelace has been branded a manic-depressive with the "most amazing delusions about her own talents", her contributions invented by Babbage to give his work romantic flair, and subsequently blown out of proportion by over-eager admirers. She has been called brilliant and she's been called stupid, and it seems we need to make up our minds as to one or the other. So which is it?

 

The Ada Lovelace Symposium, which took place over two days including Lovelace's 200th birthday (December 10th 2015) at the University of Oxford, had a good go at answering the question. The event brought together mathematicians, computer scientists, historians, authors, and all sorts of other scholars. And it gave a fascinating insight into Lovelace's life, work and legacy. (You can watch some of the talks of the symposium here, definitely worth a look!)

 

Lovelace and Babbage

 

Ada Lovelace was born on the 10th of December 1815, the daughter of Lord Byron and Anne Isabella Milbanke. Lovelace never met her father. After a rocky relationship with Lovelace's mother and an incestuous relationship with his own half-sister, Byron left the country soon after Lovelace's birth and died abroad when she was eight. Her mother arranged a broad education for Lovelace, emphasising one of her own favourite topics, mathematics, hoping it would assuage any demons Lovelace might have inherited from her notorious father.

 

You can also read the article Ada Lovelace: Visions of today
 

But Lovelace wasn't a tame character. Curious and imaginative, she toyed with the idea of building flying machines even as a child. The wings, she noted, should be proportional to the body size of the machine (an early indication of scientific insight), but the body would be that of a horse. Machines of all sorts attracted her interest, being the marvels of the industrial revolution, as did a wealth of other scientific advances. She's been described as charismatic and popular, and was adventurous not only in science. At 16 she had an affair, and tried to elope, with one of her tutors, and throughout her married life engaged in various flirtations. Her husband felt compelled to destroy many of her letters for their scandalous contents. She reportedly swore a lot, lost a fortune to gambling later in life, and, it seems, had a penchant for alcohol and opium. Not exactly a Victorian stereotype.

 

babbage.jpg

 

Charles Babbage.

 

In 1833, aged only 17, Lovelace met the mathematician, inventor and general polymath Charles Babbage at a society do. The two were introduced by Lovelace's tutor, the Scottish scientist Mary Somerville. Babbage was something of an enfant terrible, not afraid of voicing his opinions. At Cambridge, he had been unhappy with the type of maths that was taught and the notation that was used, and formed a student society as a result. He had refused to sit his final examination, reportedly because he knew he wouldn't come first. He was the author of a scathing attack on the president of the eminent Royal Society, whose council he described as "a collection of men who elect each other to office and then dine together at the expense of this society to praise each other over wine and give each other medals". His unhappiness with the way learned societies were run eventually led to the formation of the British Association for the Advancement of Science (now the British Science Association) in 1831. During the 1830s Babbage also embarked on a stint in politics, standing for Parliament twice, both times unsuccessfully.

 

Mathematically and scientifically, Babbage had quite a lot under his belt. Amongst other things, he had played an important role in founding the Astronomical Society in 1820. It was while checking astronomical tables in 1821, and finding error after error, that Babbage exclaimed "I wish to God these calculations had been executed by steam". A strange proposition for modern minds, but steam was what powered Babbage's age. The idea that mathematics, until then confined to people's heads, could be mechanised in this way must have been a genuine novelty for many people.

 

When Lovelace met Babbage, he had already acted on his impulse: he was busy working on a calculating machine he called the difference engine, which, so it was hoped, would churn out accurate trigonometric and logarithmic tables on the crank of a handle. In 1823 the British government had given Babbage £1,500 (a huge amount at the time) to produce the engine, but the actual construction was far from straightforward. By the time the government finally gave up and withdrew its support in 1842, Babbage had spent around £17,000 of treasury money and a lot of his own. Sadly, the machine was never finished. (You can see a portion of it, as well as a version of Babbage's improved difference engine, built in 1991, at the Science Museum in London.)

 

Babbage wasn't defeated, however. By 1834 he had already come up with the idea for a machine that was bigger, better, and more powerful than the difference engine. It's this analytical engine, as it was called, that enabled Lovelace to make her mathematical mark.

 

The analytical engine

 

difference_engine.jpg

 

A version of the difference engine at the Science Museum in London. The analytical engine was never built.

 

Image: Wikimedia commons.

 

To this day the analytical engine has never been built in full. That's no surprise. It's a hugely complicated machine, made of many thousands of wheels, as well as axles, levers, barrels and a vast quantity of cams, clutches, cranks and whatever else befits such a huge mechanical monster. It would have stood over 4 metres tall and been about 5 metres long: the size of a small railway locomotive.

 

It's not exactly what we today associate with the word "computer", but there are crucial similarities. The engine was made up of a processing unit where operations were performed, a memory where numbers were stored, a reader to read in data, and a printer to return results. Like the Jacquard looms of the time, which could weave complex patterns mechanically, the analytical engine was to operate on punched cards. "The analytical engine weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves," is how Lovelace famously described the machine. We won't go into the details here, but had it been built, the analytical engine would have been able to perform a huge range of mathematical tasks. It would have been a genuine programmable computer. (You can see beautiful descriptions of the engine's workings on Sydney Padua's website and in Menabrea's 1842 Sketch of the analytical engine.)

 

Lovelace was enchanted with Babbage's ideas, and in time became much more than an admiring bystander. By 1843 she had decided she'd force the analytical engine to fruition, informing her mother that Babbage was "beyond measure careless and desultory at times. — I shall be willing to be his Whipper-in during the next 3 years if I see fair prospect of success." Her correspondence with Babbage could be equally direct. "My own uncompromising principle is to endeavour to love truth and God before fame and glory ...", she wrote in 1843, musing over their respective motivations. "Yours is to love truth and God ... but to love fame, glory, honours, yet more." In her letters to Babbage she could be bossy and arrogant as well as charming and coquettish.

 

The feverish bout of correspondence this quote is part of centred on a paper which described the analytical engine in some detail. Its author was the Italian Luigi Federico Menabrea who had seen Babbage lecture about it in 1841. Lovelace could easily have produced such a paper herself, but as Babbage later recalled, the thought simply didn't occur to her. Instead, she decided to translate Menabrea's paper into English.

The translation was Lovelace's chance to share what she knew about Babbage's wondrous machine. The version that was eventually published contained a copious amount of her own notes. "The notes of the Countess of Lovelace extend to about three times the length of the original memoir," Babbage wrote in his autobiography. "Their author has entered fully into almost all the very difficult and abstract questions connected with the subject."

 

It's these notes that gained Lovelace her fame. They contain what many view as the first ever example of a computer program and Lovelace's vision as to what Babbage's engine really signified. Continue to the next page: programs and visions.

 

https://plus.maths.o...t/analysing-ada



#9
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts

Analysing Ada, continued by Marianne Freiberger

 

Submitted by mf344 on December 17, 2015
 

Click here for the first part of this article.

 

Programs and visions

 

The famous first computer program wasn't a program as we'd recognise it today. Its purpose was to compute the Bernoulli numbers: a sequence of infinitely many numbers that are important in maths. You can calculate each Bernoulli number from the ones that came before it. It was Lovelace's own idea to use those numbers to demonstrate the engine's power. And dealing with them wasn't easy. "I am in much dismay at having got into so amazing a quagmire and botheration with these Numbers, that I cannot possibly get the thing done today [...]. I am now going out on horseback. Tant mieux," she wrote to Babbage.

 

diagram_small.jpg

 

A diagram explaining the Bernoulli program. Click here to see a larger version.

 

The note she eventually completed (you can see it here) describes how the engine would compute the Bernoulli number B7 from the previous Bernoulli numbers B1, B3 and B5. The idea was that the reader of Lovelace's note would be able to gauge from this, and her more general mathematical remarks, how the engine would go about computing any of the other Bernoulli numbers using the same principle. Essentially, the note showed how the task can be broken into instructions a machine would be able to obey and perform: it was the outline, the "trace", of a program. Not quite a full algorithm but, as the curator and historian Doron Swade put it at the Oxford symposium, something that has "algorithmic intention".

 

Close scrutiny shows that this scheme for computing the Bernoulli numbers would have worked (at least on a modern computer). It contained a couple of bugs, but then Lovelace and Babbage weren't able to test it on a functioning engine, and as any computer programmer knows, the chances of producing a bug-free program at the first attempt are minuscule.
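For readers who want to see the recurrence in action, here is a modern rendering (mine, in Python; not Lovelace's actual table of operations, and note that her B1, B3, B5, B7 follow an older numbering convention): each Bernoulli number follows from its predecessors via the identity sum_{k=0..n} C(n+1, k) * B_k = 0 for n >= 1, with B_0 = 1.

from fractions import Fraction
from math import comb

def bernoulli(n_max):
    # B[0] = 1; each later B[n] is solved for from the earlier ones:
    # (n+1) * B[n] = -sum_{k<n} C(n+1, k) * B[k].
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        s = sum(comb(n + 1, k) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']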

A question people have been debating is to what extent this Bernoulli program was Lovelace's own work. It seems that Babbage did help Lovelace with the maths. "I suggested several [possible examples], but the selection was entirely her own," he wrote. "So also was the algebraic working out of the different problems, except, indeed, that relating to the numbers of Bernoulli, which I had offered to do to save Lady Lovelace the trouble. This she sent back to me for an amendment, having detected a grave mistake which I had made in the process." But doing the underlying maths is only part of the problem of producing a program, and it doesn't seem entirely clear to what extent Lovelace did the program part herself. At any rate, according to Swade, Lovelace's note doesn't contain any ideas that don't have a precedent in Babbage's work. But this doesn't mean Lovelace's input was negligible. Her contribution lies in her overarching vision of the machine's power.

Lovelace noticed that "the analytical engine is an embodying of the science of operations, constructed with peculiar reference to abstract number as the subject of those operations". But why, she asked, should numbers be the be all and end all? "[The analytical engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations [...]," she wrote. "Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent."

 

analyticalmachine_babbage_london.jpg

 

Model of part of the analytical engine at the Science Museum in London. Image: Wikimedia Commons.

With this, she recognised the universality of the machine and foresaw that modern computers would be more than mere calculators, bringing us music, movies, pictures and more. It's a universality that Babbage himself didn't recognise, thinking of his engine as a purely mathematical tool to the end of his life.

 

Lovelace also noticed something we all take for granted today, but which at the time must have appeared new: that the mechanical aspect of the machine, the wheels and cogs and cams, have nothing to do with the programming of it. We all experience this every day: many of us are adept at getting our computer to do all manner of things, for example, calculating sums using spreadsheets, but few of us know much about the actual physical workings of the machine. "[The two] are indissolubly connected, though so different in their intrinsic nature, that perhaps the same mind might not be likely to prove equally profound or successful in both." Lovelace realised that a good engineer doesn't necessarily make a good programmer, and vice versa, and that the two are essentially different jobs.

 

Lovelace also thought about the true nature of the engine, guarding against the temptation of elevating it to a thinking machine. "The analytical engine has no pretensions whatever to originate anything," she wrote. "It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths." The famous computing pioneer Alan Turing picked up this idea around a century later, calling it "Lady Lovelace's objection". It's the possibility of artificial intelligence Lovelace objected to (with reference to the analytical engine) — and seeing how difficult artificial intelligence is proving to master, her objection still holds some truth even for modern computers.

 

Maths and kaleidoscopes

 

ada_lovelace.jpg

 

Another famous portrait of Ada Lovelace, aged around 19.

 

It seems that in these notes Lovelace's vision overshadows her mathematical prowess. The latter has recently been assessed further by historian of mathematics Christopher Hollings, who had a good look at Lovelace's correspondence with one of her tutors, Augustus De Morgan. In it Hollings observed an able, though at times impatient, Lovelace, who struggled with algebra and gaps in her knowledge, but displayed determination, attention to detail, and some grasp of how research mathematics progresses. The correspondence shows, according to Hollings, that Lovelace was neither stupid nor a mathematical genius, but somewhere in between.

 

The analytical engine wasn't Lovelace's only interest, far from it. She loved animals, especially horses, and developed an interest in animal intelligence. She was fascinated by the construction of railways and bridges, the electrical telegraph, daguerreotype, steam boats, electrical induction and Faraday's early field experiments. Notably, she planned to produce a mathematical model, a calculus, of the nervous system. It's a brilliant idea (modern neuroscientists are indeed busy trying to describe the brain using maths), but it was completely infeasible: the human brain is far too complex to be described by a simple model, a fact she couldn't have been aware of.

 

Sadly, Lovelace died early, at the age of 36, from cancer. It's hard to know what she would have gone on to do, or what she would make of today's computerised world. It's even harder to know what her legacy (if any) would have been had she been a man. Judging by the Oxford symposium, Lovelace may not have been a mathematical genius excelling at algebraic manipulation, but she certainly wasn't "stupid". It's Lovelace's remarkable vision and imagination her modern-day fame deservedly rests on. Besides rigorous academic talks the symposium also saw Sydney Padua presenting her graphic novel and composer Emily Howard talking about Ada Sketches, part of a Lovelace trilogy. With such a broad legacy, and eclectic personal interests, it's fitting that the last object Lovelace bought before her death was, apparently, a kaleidoscope.

 

About the author

 

Marianne Freiberger is Editor of Plus.

 

https://plus.maths.o...g-ada-continued



#10
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts

Dyck Words

 

dyck_words.png

 

This picture by Tilman Piesk shows the 14 Dyck words of length 8. A Dyck word is a balanced string of left and right parentheses, like these:

 

(((())))
((()()))
((())())
(()(()))
((()))()
(()()())
()((()))
(()())()
(())(())
()(()())
(())()()
()(())()
()()(())
()()()()

These are, in fact, all 14 Dyck words of length 8.

 

The condition of being balanced says that as we read a Dyck word from left to right there are at each stage at least as many left parentheses as right ones, with an equal number by the time we reach the end. In the picture, a left parenthesis is shown as an upward-slanting line segment, and a right parenthesis as a downward-slanting one. The condition of being balanced then translates into the fact that the resulting ‘mountain range’ starts and ends at ground level, and never goes below ground.
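Generating Dyck words is a nice one-screen exercise; here is a sketch (mine) that also confirms the count of 14 above:

def dyck_words(n, prefix="", opens=0, closes=0):
    # Append '(' while some remain, and ')' while it keeps the prefix balanced.
    if opens == n and closes == n:
        yield prefix
        return
    if opens < n:
        yield from dyck_words(n, prefix + "(", opens + 1, closes)
    if closes < opens:
        yield from dyck_words(n, prefix + ")", opens, closes + 1)

words = list(dyck_words(4))
print(len(words))             # 14, the 4th Catalan number
print(words[0], words[-1])    # (((()))) and ()()()()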

 

The picture uses this way of drawing to illustrate a certain partial order on the set of Dyck words: w ≤ w′ iff the mountain range corresponding to w′ is at least as high at all points as the mountain range corresponding to w.

 

This partial order makes the set of Dyck words into a lattice: any two Dyck words have a least upper bound and greatest lower bound.

 

Puzzle 1: Is it true that whenever w ≤ w′ we can go from w to w′ by repeated applications of the rewrite rule )( → () ?
 

The set of all Dyck words is called the Dyck language. This plays a fundamental role in the study of expressions that involve parentheses. We can define Dyck languages with more than one kind of parentheses; for example,

 

([])(([])[])
 

is a Dyck word on two kinds of parentheses. The Chomsky–Schützenberger representation theorem characterizes context-free languages in terms of the Dyck language on two kinds of parentheses.

Returning to the Dyck language with just one kind of parenthesis, the number of Dyck words of length 2n is the nth Catalan number. The Catalan numbers count many different things, so there are a number of different interesting partial orders on sets that are in natural one-to-one correspondence with the set of Dyck words of length 2n. For example, the set Xₙ consisting of expressions built from n+1 letters using a nonassociative binary product is in natural bijection with the set of Dyck words of length 2n. The Tamari lattice is the set Xₙ ordered in such a way that w ≤ w′ iff w′ is obtained from w by repeated applications of this rewrite rule:

 

(xy)z → x(yz).
 

Here x,y,z are expressions built from letters, not necessarily single letters, and the rewrite rule can be used anywhere in a larger expression. For example, we have

 

(((ab)c)d)e → ((ab)(cd))e.
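This rewrite is easy to experiment with if we model expressions as nested pairs. A small sketch (mine): one step turns a subterm ((x, y), z) into (x, (y, z)), and breadth-first search from the left-associated tree reaches all of X₄.

def rewrites(t):
    # Yield every term reachable from t by one application of (xy)z -> x(yz).
    if not isinstance(t, tuple):
        return
    l, r = t
    if isinstance(l, tuple):
        x, y = l
        yield (x, (y, r))          # rewrite at the root
    for l2 in rewrites(l):
        yield (l2, r)              # rewrite inside the left factor
    for r2 in rewrites(r):
        yield (l, r2)              # rewrite inside the right factor

start = (((('a', 'b'), 'c'), 'd'), 'e')     # (((ab)c)d)e
target = ((('a', 'b'), ('c', 'd')), 'e')    # ((ab)(cd))e, the example above
print(target in set(rewrites(start)))       # True: one rewrite step away

seen, frontier = {start}, [start]
while frontier:
    new = {u for t in frontier for u in rewrites(t)} - seen
    seen |= new
    frontier = list(new)
print(len(seen))                            # 14 = the 4th Catalan number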
 

Below is a picture of the Tamari lattice X₄, where applications of the rewrite rule are drawn as edges going up:

 

tamari_lattice.png

Tamari Lattice – David Eppstein

 

Like the lattice at the top of this page, X₄ has 14 elements. However, the two are not isomorphic as lattices. There is an (n−1)-dimensional polytope, the associahedron, whose vertices are the elements of Xₙ and whose edges are applications of the rewrite rule (xy)z → x(yz). The picture above shows the 3-dimensional associahedron.

 

Puzzle 2: Describe a bijection between the set of Dyck words of length 2n and the set Xₙ.

 

Puzzle 3: You can use your bijection and the partial order on Dyck words described earlier to put a partial order on Xₙ.

 

Describe this partial order explicitly.

 

For a review of various partial orders on the set of Dyck words, with references, see:

 

• J. L. Baril, J. M. Pallo, The phagocyte lattice of Dyck words, Order 23 (2006), 97–107.

Tilman Piesk put the top picture on Wikicommons with a Creative Commons Attribution 3.0 Unported license. David Eppstein put his picture into the public domain on Wikicommons.

 

Visual Insight is a place to share striking images that help explain advanced topics in mathematics. I’m always looking for truly beautiful images, so if you know about one, please drop a comment here and let me know!

 

http://blogs.ams.org.../15/dyck-words/



#11
Unity

Unity

    Information Organism

  • Members
  • 2,442 posts
Mathematicians Bridge Finite-Infinite Divide

A surprising new proof is helping to connect the mathematics of infinity to the physical world.

 

WylieBeckert_RamseyTheory_1K.jpg

 

By Natalie Wolchover

May 24, 2016

 

With a surprising new proof, two young mathematicians have found a bridge across the finite-infinite divide, helping at the same time to map this strange boundary.

 

The boundary does not pass between some huge finite number and the next, infinitely large one. Rather, it separates two kinds of mathematical statements: “finitistic” ones, which can be proved without invoking the concept of infinity, and “infinitistic” ones, which rest on the assumption — not evident in nature — that infinite objects exist.

 

Mapping and understanding this division is “at the heart of mathematical logic,” said Theodore Slaman, a professor of mathematics at the University of California, Berkeley. This endeavor leads directly to questions of mathematical objectivity, the meaning of infinity and the relationship between mathematics and physical reality.

 

More concretely, the new proof settles a question that has eluded top experts for two decades: the classification of a statement known as “Ramsey’s theorem for pairs,” or RT²₂. Whereas almost all theorems can be shown to be equivalent to one of a handful of major systems of logic — sets of starting assumptions that may or may not include infinity, and which span the finite-infinite divide — RT²₂ falls between these lines. “This is an extremely exceptional case,” said Ulrich Kohlenbach, a professor of mathematics at the Technical University of Darmstadt in Germany. “That’s why it’s so interesting.”

In the new proof, Keita Yokoyama, 34, a mathematician at the Japan Advanced Institute of Science and Technology, and Ludovic Patey, 27, a computer scientist from Paris Diderot University, pin down the logical strength of RT²₂ — but not at a level most people expected. The theorem is ostensibly a statement about infinite objects. And yet, Yokoyama and Patey found that it is “finitistically reducible”: It’s equivalent in strength to a system of logic that does not invoke infinity. This result means that the infinite apparatus in RT²₂ can be wielded to prove new facts in finitistic mathematics, forming a surprising bridge between the finite and the infinite. “The result of Patey and Yokoyama is indeed a breakthrough,” said Andreas Weiermann of Ghent University in Belgium, whose own work on RT²₂ unlocked one step of the new proof.

 

Patey_Yokoyama-615x308.jpg

 

Courtesy of Ludovic Patey and Keita Yokoyama

 

Ludovic Patey, left, and Keita Yokoyama co-authored a proof giving the long-sought classification of Ramsey’s theorem for pairs.

 

Ramsey’s theorem for pairs is thought to be the most complicated statement involving infinity that is known to be finitistically reducible. It invites you to imagine having in hand an infinite set of objects, such as the set of all natural numbers. Each object in the set is paired with all other objects. You then color each pair of objects either red or blue according to some rule. (The rule might be: For any pair of numbers A < B, color the pair blue if B < 2A, and red otherwise.) When this is done, RT²₂ states that there will exist an infinite monochromatic subset: a set consisting of infinitely many numbers, such that all the pairs they make with each other are the same color. (Yokoyama, working with Slaman, is now generalizing the proof so that it holds for any number of colors.)
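The colouring rule quoted in parentheses makes a nice finite experiment. A sketch (mine): with "blue if B < 2A, red otherwise", a block of consecutive numbers that stays below double its smallest member is all blue, while the powers of two are all red, a finite shadow of the infinite monochromatic subset that RT²₂ promises.

from itertools import combinations

def colour(a, b):
    a, b = min(a, b), max(a, b)
    return "blue" if b < 2 * a else "red"

powers = [2 ** k for k in range(10)]
print({colour(a, b) for a, b in combinations(powers, 2)})       # {'red'}

consecutive = range(50, 60)
print({colour(a, b) for a, b in combinations(consecutive, 2)})  # {'blue'}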

 

The colorable, divisible infinite sets in RT²₂ are abstractions that have no analogue in the real world. And yet, Yokoyama and Patey’s proof shows that mathematicians are free to use this infinite apparatus to prove statements in finitistic mathematics — including the rules of numbers and arithmetic, which arguably underlie all the math that is required in science — without fear that the resulting theorems rest upon the logically shaky notion of infinity. That’s because all the finitistic consequences of RT²₂ are “true” with or without infinity; they are guaranteed to be provable in some other, purely finitistic way. RT²₂’s infinite structures “may make the proof easier to find,” explained Slaman, “but in the end you didn’t need them. You could give a kind of native proof — a [finitistic] proof.”

 

When Yokoyama set his sights on RT²₂ as a postdoctoral researcher four years ago, he expected things to turn out differently. “To be honest, I thought actually it’s not finitistically reducible,” he said.

 

Ramsey_Pairs-1000.png

Lucy Reading-Ikkanda for Quanta Magazine

This was partly because earlier work proved that Ramsey’s theorem for triples, or RT³₂, is not finitistically reducible: When you color trios of objects in an infinite set either red or blue (according to some rule), the infinite, monochrome subset of triples that RT³₂ says you’ll end up with is too complex an infinity to reduce to finitistic reasoning. That is, compared to the infinity in RT²₂, the one in RT³₂ is, so to speak, more hopelessly infinite.

 

Even as mathematicians, logicians and philosophers continue to parse the subtle implications of Patey and Yokoyama’s result, it is a triumph for the “partial realization of Hilbert’s program,” an approach to infinity championed by the mathematician Stephen Simpson of Vanderbilt University. The program replaces an earlier, unachievable plan of action by the great mathematician David Hilbert, who in 1921 commanded mathematicians to weave infinity completely into the fold of finitistic mathematics. Hilbert saw finitistic reducibility as the only remedy for the skepticism then surrounding the new mathematics of the infinite. As Simpson described that era, “There were questions about whether mathematics was going into a twilight zone.”

 

The Rise of Infinity

 

The philosophy of infinity that Aristotle set out in the fourth century B.C. reigned virtually unchallenged until 150 years ago. Aristotle accepted “potential infinity” — the promise of the number line (for example) to continue forever — as a perfectly reasonable concept in mathematics. But he rejected as meaningless the notion of “actual infinity,” in the sense of a complete set consisting of infinitely many elements.

 

Aristotle’s distinction suited mathematicians’ needs until the 19th century. Before then, “mathematics was essentially computational,” said Jeremy Avigad, a philosopher and mathematician at Carnegie Mellon University. Euclid, for instance, deduced the rules for constructing triangles and bisectors — useful for bridge building — and, much later, astronomers used the tools of “analysis” to calculate the motions of the planets. Actual infinity — impossible to compute by its very nature — was of little use. But the 19th century saw a shift away from calculation toward conceptual understanding. Mathematicians started to invent (or discover) abstractions — above all, infinite sets, pioneered in the 1870s by the German mathematician Georg Cantor. “People were trying to look for ways to go further,” Avigad said. Cantor’s set theory proved to be a powerful new mathematical system. But such abstract methods were controversial. “People were saying, if you’re giving arguments that don’t tell me how to calculate, that’s not math.”

 

And, troublingly, the assumption that infinite sets exist led Cantor directly to some nonintuitive discoveries. He found that infinite sets come in an infinite cascade of sizes — a tower of infinities with no connection to physical reality. What’s more, set theory yielded proofs of theorems that were hard to swallow, such as the 1924 Banach-Tarski paradox, which says that if you break a sphere into pieces, each composed of an infinitely dense scattering of points, you can put the pieces together in a different way to create two spheres that are the same size as the original. Hilbert and his contemporaries worried: Was infinitistic mathematics consistent? Was it true?

 

Amid fears that set theory contained an actual contradiction — a proof of 0 = 1, which would invalidate the whole construct — math faced an existential crisis. The question, as Simpson frames it, was, “To what extent is mathematics actually talking about anything real? [Is it] talking about some abstract world that’s far from the real world around us? Or does mathematics ultimately have its roots in reality?”

 

Hilbert.jpg

Wikipedia

Amid questions over the consistency of infinitistic mathematics, the great German mathematician David Hilbert called upon his colleagues to prove that it rested upon solid, finitistic logical foundations.

 

Even though they questioned the value and consistency of infinitistic logic, Hilbert and his contemporaries did not wish to give up such abstractions — power tools of mathematical reasoning that in 1928 would enable the British philosopher and mathematician Frank Ramsey to chop up and color infinite sets at will. “No one shall expel us from the paradise which Cantor has created for us,” Hilbert said in a 1925 lecture. He hoped to stay in Cantor’s paradise and obtain proof that it stood on stable logical ground. Hilbert tasked mathematicians with proving that set theory and all of infinitistic mathematics is finitistically reducible, and therefore trustworthy. “We must know; we will know!” he said in a 1930 address in Königsberg — words later etched on his tomb.

 

However, the Austrian-American mathematician Kurt Gödel showed in 1931 that, in fact, we won’t. In a shocking result, Gödel proved that no consistent system of logical axioms (or starting assumptions) rich enough to express arithmetic can ever prove its own consistency; to prove that such a system of logic is consistent, you always need another axiom outside of the system. This means there is no ultimate set of axioms — no theory of everything — in mathematics. When looking for a set of axioms that yield all true mathematical statements and never contradict themselves, you always need another axiom. Gödel’s theorem meant that Hilbert’s program was doomed: The axioms of finitistic mathematics cannot even prove their own consistency, let alone the consistency of set theory and the mathematics of the infinite.

 

This might have been less worrying if the uncertainty surrounding infinite sets could have been contained. But it soon began leaking into the realm of the finite. Mathematicians started to turn up infinitistic proofs of concrete statements about natural numbers — theorems that could conceivably find applications in physics or computer science. And this top-down reasoning continued. In 1994, Andrew Wiles used infinitistic logic to prove Fermat’s Last Theorem, the great number theory problem about which Pierre de Fermat in 1637 cryptically claimed, “I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.” Can Wiles’ 150-page, infinity-riddled proof be trusted?

 

With such questions in mind, logicians like Simpson have maintained hope that Hilbert’s program can be at least partially realized. Although not all of infinitistic mathematics can be reduced to finitistic reasoning, they argue that the most important parts can be firmed up. Simpson, an adherent of Aristotle’s philosophy who has championed this cause since the 1970s (along with Harvey Friedman of Ohio State University, who first proposed it), estimates that some 85 percent of known mathematical theorems can be reduced to finitistic systems of logic. “The significance of it,” he said, “is that our mathematics is thereby connected, via finitistic reducibility, to the real world.”

 

An Exceptional Case

 

Almost all of the thousands of theorems studied by Simpson and his followers over the past four decades have turned out (somewhat mysteriously) to be reducible to one of five systems of logic spanning both sides of the finite-infinite divide. For instance, Ramsey’s theorem for triples (and all ordered sets with more than three elements) was shown in 1972 to belong at the third level up in the hierarchy, which is infinitistic. “We understood the patterns very clearly,” said Henry Towsner, a mathematician at the University of Pennsylvania. “But people looked at Ramsey’s theorem for pairs, and it blew all that out of the water.”

 

A breakthrough came in 1995, when the British logician David Seetapun, working with Slaman at Berkeley, proved that RT22 is logically weaker than RT32 and thus below the third level in the hierarchy. The breaking point between RT22 and RT32 comes about because a more complicated coloring procedure is required to construct infinite monochromatic sets of triples than infinite monochromatic sets of pairs.

 

Ramsey_Triples-615.png
 

Lucy Reading-Ikkanda for Quanta Magazine

 

“Since then, many seminal papers regarding RT22 have been published,” said Weiermann — most importantly, a 2012 result by Jiayi Liu (paired with a result by Carl Jockusch from the 1960s) showed that RT22 cannot prove, nor be proved by, the logical system located at the second level in the hierarchy, one rung below RT32. The level-two system is known to be finitistically reducible to “primitive recursive arithmetic,” a set of axioms widely considered the strongest finitistic system of logic. The question was whether RT22 would also be reducible to primitive recursive arithmetic, despite not belonging at the second level in the hierarchy, or whether it required stronger, infinitistic axioms. “A final classification of RT22 seemed out of reach,” Weiermann said.

 

But then in January, Patey and Yokoyama, young guns who have been shaking up the field with their combined expertise in computability theory and proof theory, respectively, announced their new result at a conference in Singapore. Using a raft of techniques, they showed that RT22 is indeed equal in logical strength to primitive recursive arithmetic, and therefore finitistically reducible.

 

“Everybody was asking them, ‘What did you do, what did you do?’” said Towsner, who has also worked on the classification of RT22 but said that “like everyone else, I did not get far.” “Yokoyama is a very humble guy. He said, ‘Well, we didn’t do anything new; all we did was, we used the method of indicators, and we used this other technique,’ and he proceeded to list off essentially every technique anyone has ever developed for working on this sort of problem.”

 

In one key step, the duo modeled the infinite monochromatic set of pairs in RT22 using a finite set whose elements are “nonstandard” models of the natural numbers. This enabled Patey and Yokoyama to translate the question of the strength of RT22 into the size of the finite set in their model. “We directly calculate the size of the finite set,” Yokoyama said, “and if it is large enough, then we can say it’s not finitistically reducible, and if it’s small enough, we can say it is finitistically reducible.” It was small enough.

 

RT22 has numerous finitistic consequences, statements about natural numbers that are now known to be expressible in primitive recursive arithmetic, and which are thus certain to be logically consistent. Moreover, these statements — which can often be cast in the form “for every number X, there exists another number Y such that … ” — are now guaranteed to have primitive recursive algorithms associated with them for computing Y. “This is a more applied reading of the new result,” said Kohlenbach. In particular, he said, RT22 could yield new bounds on algorithms for “term rewriting,” placing an upper limit on the number of times outputs of computations can be further simplified.
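As a toy illustration of what such a guarantee looks like (my example, not one of RT22’s actual consequences): the statement “for every number X, there exists a prime Y greater than X” carries a primitive recursive bound, because Euclid’s argument shows that some prime divides X! + 1 and must exceed X, so a bounded search always finds the witness.

    from math import factorial

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def next_prime(x):
        # Euclid: some prime divides x! + 1 and must exceed x, so this
        # search is guaranteed to stop -- a primitive recursive bound on Y.
        for y in range(x + 1, factorial(x) + 2):
            if is_prime(y):
                return y

    print([next_prime(x) for x in range(2, 10)])  # [3, 5, 5, 7, 7, 11, 11, 11]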

 

Some mathematicians hope that other infinitistic proofs can be recast in the RT22 language and shown to be logically consistent. A far-fetched example is Wiles’ proof of Fermat’s Last Theorem, seen as a holy grail by researchers like Simpson. “If someone were to discover a proof of Fermat’s theorem which is finitistic except for involving some clever applications of RT22,” he said, “then the result of Patey and Yokoyama would tell us how to find a purely finitistic proof of the same theorem.”

 

Simpson considers the colorable, divisible infinite sets in RT22 “convenient fictions” that can reveal new truths about concrete mathematics. But, one might wonder, can a fiction ever be so convenient that it can be thought of as a fact? Does finitistic reducibility lend any “reality” to infinite objects — to actual infinity? There is no consensus among the experts. Avigad is of two minds. Ultimately, he says, there is no need to decide. “There’s this ongoing tension between the idealization and the concrete realizations, and we want both,” he said. “I’m happy to take mathematics at face value and say, look, infinite sets exist insofar as we know how to reason about them. And they play an important role in our mathematics. But at the same time, I think it’s useful to think about, well, how exactly do they play a role? And what is the connection?”

With discoveries like the finitistic reducibility of RT22 — the longest bridge yet between the finite and the infinite — mathematicians and philosophers are gradually moving toward answers to these questions. But the journey has lasted thousands of years already, and seems unlikely to end anytime soon. If anything, with results like RT22, Slaman said, “the picture has gotten quite complicated.”

 

This article was reprinted on Wired.com.

 

https://www.quantama...nfinite-divide/



#12
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts
In Mysterious Pattern, Math and Nature Converge

 

Cuernavaca_Morelos_Mexico2.jpg

 

In Cuernavaca, Mexico, a "spy" network makes the decentralized bus system more efficient. As a consequence, the departure times of buses exhibit a ubiquitous pattern known as “universality.” 

 

By Natalie Wolchover

 

February 5, 2013

 

In 1999, while sitting at a bus stop in Cuernavaca, Mexico, a Czech physicist named Petr Šeba noticed young men handing slips of paper to the bus drivers in exchange for cash. It wasn’t organized crime, he learned, but another shadow trade: Each driver paid a “spy” to record when the bus ahead of his had departed the stop. If it had left recently, he would slow down, letting passengers accumulate at the next stop. If it had departed long ago, he sped up to keep other buses from passing him. This system maximized profits for the drivers. And it gave Šeba an idea.

 

“We felt here some kind of similarity with quantum chaotic systems,” explained Šeba’s co-author, Milan Krbálek, in an email.

After several failed attempts to talk to the spies himself, Šeba asked his student to explain to them that he wasn’t a tax collector, or a criminal — he was simply a “crazy” scientist willing to trade tequila for their data. The men handed over their used papers. When the researchers plotted thousands of bus departure times on a computer, their suspicions were confirmed: The interaction between drivers caused the spacing between departures to exhibit a distinctive pattern previously observed in quantum physics experiments.

 

“I was thinking that something like this could come out, but I was really surprised that it comes exactly,” Šeba said.

Subatomic particles have little to do with decentralized bus systems. But in the years since the odd coupling was discovered, the same pattern has turned up in other unrelated settings. Scientists now believe the widespread phenomenon, known as “universality,” stems from an underlying connection to mathematics, and it is helping them to model complex systems from the Internet to Earth’s climate.

 

spectralpatterns-300x247.png

 

Illustration by Simons Science News

 

The red pattern exhibits a precise balance of randomness and regularity known as “universality,” which has been observed in the spectra of many complex, correlated systems. In this spectrum, a mathematical formula called the “correlation function” gives the exact probability of finding two lines spaced a given distance apart.

 

The pattern was first discovered in nature in the 1950s in the energy spectrum of the uranium nucleus, a behemoth with hundreds of moving parts that quivers and stretches in infinitely many ways, producing an endless sequence of energy levels. In 1972, the number theorist Hugh Montgomery observed it in the zeros of the Riemann zeta function, a mathematical object closely related to the distribution of prime numbers. In 2000, Krbálek and Šeba reported it in the Cuernavaca bus system. And in recent years it has shown up in spectral measurements of composite materials, such as sea ice and human bones, and in signal dynamics of the Erdős–Rényi model, a simplified version of the Internet named for Paul Erdős and Alfréd Rényi.

 

Each of these systems has a spectrum — a sequence like a bar code representing data such as energy levels, zeta zeros, bus departure times or signal speeds. In all the spectra, the same distinctive pattern appears: The data seem haphazardly distributed, and yet neighboring lines repel one another, lending a degree of regularity to their spacing. This fine balance between chaos and order, which is defined by a precise formula, also appears in a purely mathematical setting: It defines the spacing between the eigenvalues, or solutions, of a vast matrix filled with random numbers.

 

“Why so many physical systems behave like random matrices is still a mystery,” said Horng-Tzer Yau, a mathematician at Harvard University. “But in the past three years, we have made a very important step in our understanding.”

 

By investigating the “universality” phenomenon in random matrices, researchers have developed a better sense of why it arises elsewhere — and how it can be used. In a flurry of recent papers, Yau and other mathematicians have characterized many new types of random matrices, which can conform to a variety of numerical distributions and symmetry rules. For example, the numbers filling a matrix’s rows and columns might be chosen from a bell curve of possible values, or they might simply be 1s and –1s. The top right and bottom left halves of the matrix might be mirror images of one another, or not. Time and again, regardless of their specific characteristics, the random matrices are found to exhibit that same chaotic yet regular pattern in the distribution of their eigenvalues. That’s why mathematicians call the phenomenon “universality.”
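That claim is easy to probe numerically. The sketch below (mine; it assumes only numpy, and the normalization is deliberately crude) builds symmetric matrices with Gaussian entries and with ±1 entries and compares the gaps between neighboring eigenvalues. Both ensembles show the same level repulsion: spacings near zero are rare.

    import numpy as np

    def bulk_spacings(sample_entries, n=400, trials=30):
        # Normalized gaps between consecutive eigenvalues near the
        # center of the spectrum, pooled over several random matrices.
        gaps = []
        for _ in range(trials):
            a = sample_entries((n, n))
            m = np.triu(a) + np.triu(a, 1).T      # symmetrize
            ev = np.sort(np.linalg.eigvalsh(m))
            mid = ev[n // 4 : 3 * n // 4]         # stay in the bulk
            g = np.diff(mid)
            gaps.extend(g / g.mean())             # crude local normalization
        return np.array(gaps)

    rng = np.random.default_rng(0)
    gauss = bulk_spacings(lambda s: rng.standard_normal(s))
    signs = bulk_spacings(lambda s: rng.choice([-1.0, 1.0], size=s))

    # Both ensembles show level repulsion: almost no near-zero spacings.
    for name, g in [("gaussian", gauss), ("plus-minus one", signs)]:
        print(name, "fraction of spacings < 0.1:", np.mean(g < 0.1).round(3))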

“It seems to be a law of nature,” said Van Vu, a mathematician at Yale University who, with Terence Tao of the University of California, Los Angeles, has proven universality for a broad class of random matrices.

 

Universality is thought to arise when a system is very complex, consisting of many parts that strongly interact with each other to generate a spectrum. The pattern emerges in the spectrum of a random matrix, for example, because the matrix elements all enter into the calculation of that spectrum. But random matrices are merely “toy systems” that are of interest because they can be rigorously studied, while also being rich enough to model real-world systems, Vu said. Universality is much more widespread. Wigner’s hypothesis (named after Eugene Wigner, the physicist who discovered universality in the spectra of atomic nuclei) asserts that all complex, correlated systems exhibit universality, from a crystal lattice to the Internet.

 

The more complex a system is, the more robust its universality should be, said László Erdős of the University of Munich, one of Yau’s collaborators. “This is because we believe that universality is the typical behavior.”

 

In many simple systems, individual components can assert too great an influence on the outcome of the system, changing the spectral pattern. With larger systems, no single component dominates. “It’s like if you have a room with a lot of people and they decide to do something, the personality of one person isn’t that important,” Vu said.

 

Internet_map_1024-300x300.jpg

 

Illustration by Matt Britt

 

Mathematicians are using random matrix models to study and predict some of the Internet’s properties, such as the size of typical computer clusters. 

 

Whenever a system exhibits universality, the behavior acts as a signature certifying that the system is complex and correlated enough to be treated like a random matrix. “This means you can use a random matrix to model it,” Vu said. “You can compute other parameters of the matrix model and use them to predict that the system may behave like the parameters you computed.”

 

This technique is enabling scientists to understand the structure and evolution of the Internet. Certain properties of this vast computer network, such as the typical size of a cluster of computers, can be closely estimated by measurable properties of the corresponding random matrix. “People are very interested in clusters and their locations, partially motivated by practical purposes such as advertising,” Vu said.

 

A similar technique may lead to improvements in climate change models. Scientists have found that the presence of universality in features similar to the energy spectrum of a material indicates that its components are highly connected, and that it will therefore conduct fluids, electricity or heat. Conversely, the absence of universality may show that a material is sparse and acts as an insulator. In new work presented in January at the Joint Mathematics Meetings in San Diego, Calif., Ken Golden, a mathematician at the University of Utah, and his student, Ben Murphy, used this distinction to predict heat transfer and fluid flow in sea ice, both at the microscopic level and through patchworks of Arctic melt ponds spanning thousands of kilometers.

 

The spectral measure of a mosaic of melt ponds, taken from a helicopter, or a similar measurement taken of a sample of sea ice in an ice core, instantly exposes the state of either system. “Fluid flow through sea ice governs or mediates very important processes that you need to understand in order to understand the climate system,” Golden said. “The transition in the eigenvalue statistics presents a brand new, mathematically rigorous approach to incorporating sea ice into climate models.”

 

The same trick may also eventually provide an easy test for osteoporosis. Golden, Murphy and their colleagues have found that the spectrum of a dense, healthy bone exhibits universality, while that of a porous, osteoporotic bone does not.

 

Melt_Ponds_Connected-300x199.jpg

 

Don Perovich

 

When Arctic melt ponds are sufficiently connected, as pictured here, they exhibit a property called universality that researchers believe is common to all complex, correlated systems.

 

“We’re dealing with systems where the ‘particles’ can be on the millimeter or even on the kilometer scale,” Murphy said, referring to the systems’ component parts. “It’s amazing that the same underlying mathematics describes both.”

The reason a real-world system would exhibit the same spectral behavior as a random matrix may be easiest to understand in the case of the nucleus of a heavy atom. All quantum systems, including atoms, are governed by the rules of mathematics, and specifically by those of matrices. “That’s what quantum mechanics is all about,” said Freeman Dyson, a retired mathematical physicist who helped develop random matrix theory in the 1960s and 1970s while at Princeton’s Institute for Advanced Study. “Every quantum system is governed by a matrix representing the total energy of the system, and the eigenvalues of the matrix are the energy levels of the quantum system.”

 

The matrices behind simple atoms, such as hydrogen or helium, can be worked out exactly, yielding eigenvalues that correspond with stunning precision to the measured energy levels of the atoms. But the matrices corresponding to more complex quantum systems, such as a uranium nucleus, quickly grow too thorny to grasp. According to Dyson, this is why such nuclei can be compared to random matrices. Many of the interactions inside uranium — the elements of its unknown matrix — are so complex that they become washed out, like a mélange of sounds blending into noise. Consequently, the unknown matrix that governs the nucleus behaves like a matrix filled with random numbers, and so its spectrum exhibits universality.

 

Melt_Ponds_Disconnected-300x200.jpg

 

Don Perovich

 

These disconnected Arctic melt ponds do not form a system that is correlated enough to exhibit universality. Instead, the energy spectrum of the system is random.

 

Scientists have yet to develop an intuitive understanding of why this particular random-yet-regular pattern, and not some other pattern, emerges for complex systems. “We only know it from calculations,” Vu said. Another mystery is what it has to do with the Riemann zeta function, whose spectrum of zeros exhibits universality. The zeros of the zeta function are closely tied to the distribution of the prime numbers — the irreducible integers out of which all others are constructed.

 

Mathematicians have long wondered at the haphazard way in which the primes are sprinkled along the number line from one to infinity, and universality offers a clue. Some think there may be a matrix underlying the Riemann zeta function that is complex and correlated enough to exhibit universality. Discovering such a matrix would have “big implications” for finally understanding the distribution of the primes, said Paul Bourgade, a mathematician at Harvard.

 

Or perhaps the explanation lies deeper still. “It may happen that it is not a matrix that lies at the core of both Wigner’s universality and the zeta function, but some other, yet undiscovered, mathematical structure,” Erdős said. “Wigner matrices and zeta functions may then just be different representations of this structure.”

 

Many mathematicians are searching for the answer, with no guarantee that there is one. “Nobody imagined that the buses in Cuernavaca would turn out to be an example of this. Nobody imagined that the zeroes of the zeta function would be another example,” Dyson said. “The beauty of science is it’s completely unpredictable, and so everything useful comes out of surprises.”

 

This article was reprinted on Wired.com.

 

https://www.quantama...ature-converge/



#13
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts

On the periodic table of math: the L-functions and Modular Forms Database, by Rachel Crowell

 

 

md-201605-lmfdb.jpg

 

In this article, John Voight, an associate professor at Dartmouth College, compares the L-functions and Modular Forms Database (LMFDB) to the first periodic table of the elements. Anna Haensch, an assistant professor of mathematics at Duquesne University, said (in a PR Newswire press release), "Like the public DNA database, the LMFDB lets us peek, for the first time, into the relationships of different mathematical items and trace their common ancestry...Another way to think about this is a huge Facebook for mathematical items. We can see what forms are friendly with other items--and can investigate these possibilities." [Anna is a former Digest writer and current blogger on the AMS Blog on Math Blogs.]

 

Haensch and Voight were on the international team that constructed the database, which contains data supplied by more than 80 mathematicians from 12 countries. The database catalogs millions of mathematical objects and the relationships between them. The LMFDB lets mathematicians around the world conserve valuable resources, time and brain power, by sparing them calculations whose results are already stored in the database. Researchers believe the database will be helpful to mathematicians who are working to solve problems in pure mathematics, such as the Riemann hypothesis. They also think scientists could use the database to learn about relationships between mathematical objects and apply this knowledge to design better data encryption systems, including those used in cloud storage. Image: Graph of zeros of the Riemann zeta function along its critical line, from lmfdb.org.

 

See "A new way to explore the mathematical universe," by Viviane Richter, Cosmos Magazine, 11 May 2016

 

http://www.ams.org/n...ex#201605-lmfdb



#14
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts

How the hidden mathematics of living cells could help us decipher the brain

 

image-20160516-15930-5nnsna.jpg

 

Given how much they can actually do, computers have a surprisingly simple basis. Indeed, the logic they use has worked so well that we have even started to think of them as analogous to the human brain. Current computers use just two basic values – 0 (false) and 1 (true) – and apply simple operations like “and”, “or” and “not” to compute with them. These operations can be combined and scaled up to represent virtually any computation.
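A minimal sketch of that scaling-up (the half-adder and all names here are mine, purely illustrative): from “and”, “or” and “not” alone one can build exclusive-or, and from that a circuit that adds binary digits, the first step toward arbitrary computation.

    def NOT(a): return 1 - a
    def AND(a, b): return a & b
    def OR(a, b): return a | b

    def xor(a, b):                  # built only from AND, OR and NOT
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    def half_adder(a, b):           # one-bit addition: sum bit and carry bit
        return xor(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "->", half_adder(a, b))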

 

This “binary” or “Boolean” logic was introduced by George Boole in 1854 to describe what he called “the laws of thought”. But the brain is far from a binary logic device. And while programmes such as the Human Brain Project seek to model the brain using computers, the notion of what computers are is also constantly changing.

 

So will we ever be able to model something as complex as the human brain using computers? After all, biological systems use symmetry and interaction to do things that even the most powerful computers cannot do – like surviving, adapting and reproducing. This is one reason why binary logic often falls short of describing how living things or human intelligence work. But our new research suggests there are alternatives: by using the mathematics that describes biological networks in the computers of the future, we may be able to make them more complex and more similar to living systems like the brain.

 

Living organisms do not live in a world of zeroes and ones. And if binary logic doesn’t naturally describe their activity, what kind of mathematics does? I was involved in an international project which studied whether mathematical structures called “Simple Non-Abelian Groups” (SNAGs) may describe complex processes in living cells. SNAGs are common in mathematics and physics, and are based on the principles of symmetry and interaction. SNAGs offer a potentially powerful alternative to binary logic for computation.

 

Helpful SNAGs

 

There are infinitely many kinds of SNAGs. They were conjured by the brilliant 19th-century French mathematician Évariste Galois, who died aged 20 in a duel over a romantic interest. Indeed, he wrote much of his ground-breaking theory during a feverish night before the duel.

 

The smallest SNAG – A5 – describes the symmetries of two beautiful 3D shapes known since the time of the ancient Greeks: the icosahedron (made of 20 triangles) and the dodecahedron (made of 12 pentagons). SNAGs can be thought of as “multiplication tables” for how symmetries interact, rather than for how numbers multiply.

 

Dodecahedron.gif Icosahedron.gif

Dodecahedron and icosahedron (Platonic solids): 3D shapes with SNAG symmetry.

 

 

Unlike the ones and zeros of binary logic, the SNAG for each of these shapes has 60 values – or “symmetries”. These symmetries operate like rotations that can be combined. Performing a rotation and following it with a second can have the same effect as another kind of rotation, giving a kind of “multiplication table” for these 60 symmetries. For example, if you rotate the icosahedron (pictured above) five times by 72 degrees clockwise around the axis through its centre and any vertex (corner), it will get back to the starting configuration.
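Both facts are easy to verify computationally. The 60 rotations of the icosahedron act as the even permutations of five objects, the group A5, so a short script (a standard identification, sketched here by me) can count the symmetries and confirm that a 72-degree turn repeated five times is the identity.

    from itertools import permutations

    def is_even(p):
        # Count inversions; the even permutations of 5 symbols form A5.
        inv = sum(p[i] > p[j]
                  for i in range(len(p)) for j in range(i + 1, len(p)))
        return inv % 2 == 0

    a5 = [p for p in permutations(range(5)) if is_even(p)]
    print(len(a5))              # 60 rotational symmetries

    def compose(p, q):          # apply q first, then p
        return tuple(p[q[i]] for i in range(5))

    turn = (1, 2, 3, 4, 0)      # a 5-cycle, like one 72-degree rotation
    x = (0, 1, 2, 3, 4)         # the identity
    for _ in range(5):
        x = compose(turn, x)
    print(x)                    # back to (0, 1, 2, 3, 4) after five turns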

 

The structure of SNAGs is a natural kind of basis for computation that is just as powerful as binary logic, but presents a very different view about which computations are easy. To compute with SNAGs, nature (or humans or future computers) can use sequences of SNAG symmetries combined according to the rules. Patterns of events and interactions determine which symmetries occur in the sequence’s variable positions.

 

Symmetries in nature

 

We have for the first time shown that there are SNAGs hidden in common biological networks. To do this, we analysed the internal workings of cells (their gene regulation and metabolism) using mathematics, computers and models from systems biology. We found that SNAG symmetries accurately describe potential activities in the genetic regulatory network that controls a cell’s response to certain kinds of stress – such as radiation and DNA damage. This may be hugely important as it means SNAGs can describe cellular processes intimately involved in self-repair, “cell suicide”, and cancer.

 

image-20160520-4463-51xv6t.jpg

Multiphoton fluorescence image of so-called HeLa cells using a laser scanning microscope. NIH

 

The specific SNAG involved in this gene network is A5. The 60 symmetries in this case are the result of particular sequences of manipulations by the cell’s genetic regulatory network to transform ensembles of proteins into other forms. For example, when a set of five concentration levels of proteins is manipulated, it can be transformed to another set. When this is done many times, it can break some of the proteins down, join some together or synthesise new types of proteins. But after a specific number of manipulations the original five concentration levels of proteins will eventually return.

 

It doesn’t stop at cellular damage control processes. We have also shown mathematically that nearly all biological reaction networks must have numerous embedded SNAG components. However, lab work is still needed to explain how and to what extent cells exploit SNAGs in their activity.

 

Computation with SNAGs has not yet been exploited in conventional computers, but we are hoping to change that. In the future, new kinds of computers and software systems may deploy resources the way some living organisms do, in robust adaptive responses. Driven by interaction with their environment, including human users, they could grow new structures, divide up tasks among different types of computational “cells” such as hardware units or software processes, and allow old structures to wither and be reabsorbed if unused.

 

Understanding how living things and brains use interaction-based computations, which are all around us, may radically reshape not only our computers and the internet, but the existing models of the brain and living organisms. SNAG-based computations may finally help us build better and more predictive working models of cells and of the brain. But we have only sighted the first examples, and so have a long way to go. After all, as Shakespeare and this discovery of SNAG-computation in cells remind us: “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.”

 

https://theconversat...the-brain-59483



#15
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts

How Euclid once ruled the world

 

by Judith Grabiner

 

Submitted by mf344 on May 9, 2016
 

euclid.jpg

 

Euclid, as depicted by Justus van Gent in the 15th century.

 

Geometry is obviously a very useful area of maths. We need it to measure things, to understand shapes, and to navigate through the spaces we live in. But I'd like to argue that geometry is much more than that: it interacts with all aspects of human thought and life. To start, let's turn to a person who is universally known as the "father of geometry": the ancient Greek mathematician Euclid. Euclid's work is the earliest example we have of a systematic approach to geometry. When you make a general statement in geometry, such as Pythagoras' theorem, you should prove this statement by deriving it from statements you're convinced are self-evident, using the rules of logic. For 2000 years Euclid's systematic approach seemed to prove truths about geometrical objects, and thereby to achieve certainty.

 

Euclidean rigour

 

Many important later thinkers believed that other subjects might come to share the certainty of geometry if only they followed the same method. René Descartes, for example, said that if we start with self-evident truths (also called axioms) and then proceed by logically deducing more and more complex truths from these, then there's nothing we couldn't come to know. The philosopher Benedict Spinoza even wrote an Ethics Demonstrated in Geometrical Order, with explicitly labelled axioms and definitions. He claimed to prove the existence of God, closing his proof with a QED just as mathematicians do.

This article is based on Grabiner's talk at the Ada Lovelace Symposium.
 

In science, Isaac Newton's famous work Principia Mathematica clearly demonstrates Euclid's influence. Newton called his famous laws of motion "axioms" and deduced his law of gravitation in the form of two mathematical theorems. As Newton famously wrote, "it's the glory of geometry that from so few principles it can accomplish so much." And here's one more example of Euclid's influence. The American Declaration of Independence is designed to inspire faith in its certainty by using the Euclidean form. Thomas Jefferson, who knew more of the mathematics of his time than any other American president, began his argument, "We hold these truths to be self-evident: that all men are created equal." There are other self-evident truths in the document as well; Jefferson uses the word "prove"; and the actual declaration that founded the United States is stated explicitly as the conclusion of a logical argument, beginning with a "therefore": "We, therefore ... declare, that these United Colonies are, and of right ought to be, free and independent states".

 

So in philosophy, theology, science, and politics, the idealised Euclidean model of reasoning has shaped conceptions of proof, truth, and certainty.

 

Euclid's first four postulates
  1. A straight line can be drawn from any point to any other point.
  2. A finite straight line can be extended as long as desired.
  3. A circle can be constructed with any point as its centre and with any length as its radius.
  4. All right angles are equal to one another.

Euclid's postulates

 

Before we look into the influence of Euclid's geometry, let's have a look at the assumptions, or postulates, he built this geometry on. The first four are shown in the box on the right. They are straightforward and nobody in their right mind would doubt them. But there is also a fifth one, called the parallel postulate:

 

If a straight line that falls on two straight lines makes the interior angles on the same side add up to less than two right angles, then the two straight lines, if produced indefinitely, meet on that side.

 

Confused? Here's a picture to understand it:

fifth.png

The postulate says that if angle A plus angle B add to less than two right angles, then the green lines must meet on the same side as the angles are located.
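For the numerically inclined, here is a small sketch (the coordinates and angles are my choices) that puts the transversal on the x-axis and solves for where the two lines cross; when angle A plus angle B equals two right angles, the system degenerates and there is no meeting point.

    import numpy as np

    def meeting_point(A_deg, B_deg):
        # Transversal from P = (0, 0) to Q = (1, 0); one line leaves P at
        # interior angle A, the other leaves Q at interior angle B.
        A, B = np.radians(A_deg), np.radians(B_deg)
        d1 = np.array([np.cos(A), np.sin(A)])     # direction out of P
        d2 = np.array([-np.cos(B), np.sin(B)])    # direction out of Q
        M = np.column_stack([d1, -d2])
        if abs(np.linalg.det(M)) < 1e-12:
            return None                           # parallel: A + B = 180
        t, s = np.linalg.solve(M, np.array([1.0, 0.0]))
        return t * d1 if t > 0 and s > 0 else None  # meet on the angles' side

    print(meeting_point(80, 80))   # A + B < 180: the lines meet, near (0.5, 2.84)
    print(meeting_point(90, 90))   # A + B = 180: parallel, no meeting point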

 

I sometimes ask my students to vote on whether this postulate is self-evident, and they overwhelmingly agree that it is not — you've got to draw a picture to even understand it. But if it is not self-evident, then maybe it should not be taken as given, but be proved from the other postulates. The Greeks tried to do this but they failed. So did medieval Islamic and Jewish mathematicians, and those in 17th- and 18th-century Europe.

 

What the Greeks did manage to prove, however, is that the 5th postulate is logically equivalent to the uniqueness of parallel lines: given a line L and a point P, there is only one line through P that is parallel to L and lies in the same plane as L.

parallels.png

Euclid and physics

 

Euclid never talked about the space his geometric shapes live in, but it appears that he implicitly assumed it to be an infinite expanse that's the same in all directions and in which every point is just like every other point.

Later thinkers, especially starting in the Renaissance, talked a lot about space. And they agreed with these earlier assumptions. The idea that space should be like that comes from the principle of sufficient reason, which seems rather obvious at first glance:

 

For everything that is, there's a reason why it must be as it is and not otherwise.

 

fulcrum.jpg

 

A lever with equal weights at equal distances from the fulcrum must balance.

 

The principle is at least as old as Archimedes, and it allows us to explain things in the world around us. Here's an example: How do we know that a lever with equal weights at equal distances from the fulcrum must balance? Well, why not? There is no reason for the lever to go down on either side: so it must go down on neither; therefore it balances.

 

The greatest advocate of sufficient reason was the 17th century mathematician Gottfried Wilhelm Leibniz, who believed that God used the principle, along with the laws of logic, in making the universe. And since God made it rationally, we humans can figure it out.

 

A striking example of how the universe is transparent to human reason is the discovery of Newton's first Law of Motion, made independently by Descartes and Pierre Gassendi fifty years before Newton. Here's how they figured it out. The law says that a body in motion with no forces acting on it continues in a straight line at a constant speed. Why? The body continues in a straight line because it has no reason to turn away in some other direction: all the directions are the same. It goes at a constant speed because it has no reason to speed up or slow down: all the points in space are the same, so there's no reason the body should prefer being at one point to another. Similar arguments show that a body at rest with no forces acting on it stays right where it is.

 

vector_parallelogram.png

 

A parallelogram of forces. The two solid blue arrows represent two forces acting on a particle that sits at their tails. The lengths of the arrows represent the velocities the two forces could produce in the particle by acting on it for a unit of time, and their directions indicate the directions of the forces. The red arrow indicates the net effect of both forces acting together. Image: Brews ohare, CC BY-SA 3.0.

 

Sufficient reason is a powerful principle, and it seems to suggest that the space we live in is just like the space of Euclid's geometry. It's no surprise, then, that 17th and 18th century thinking was Euclidean through and through. Newton's physics, for example, implicitly relied on Euclid's 5th postulate. It needed those parallelograms of forces you might have met at school. Proving the properties of parallelograms requires Euclid's theory of parallels and thus the 5th postulate.
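In modern notation the parallelogram rule is just vector addition, as in the tiny sketch below (the numbers are mine); what the 5th postulate underwrites is the geometry behind the rule, namely that the opposite sides of the parallelogram really are equal and parallel.

    import numpy as np

    f1 = np.array([3.0, 1.0])        # one force acting on the particle
    f2 = np.array([1.0, 2.0])        # the other force
    net = f1 + f2                    # the parallelogram's diagonal
    print(net, np.linalg.norm(net))  # [4. 3.] 5.0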

 

This is why mathematicians of the 18th century cared so much about proving the 5th postulate. It was hugely important; not just geometry, but all of science rested on it. An interesting example comes from the mathematician Joseph-Louis Lagrange: so impressed was he with the power of sufficient reason that he tried to use it to prove the 5th postulate. His argument was flawed, but it's a significant fact that such an important mathematician was willing to get up in public and link space being Euclidean with the principle of sufficient reason.

 

Euclid and philosophy

 

Philosophy was equally permeated by Euclid's ideas. A super-influential philosopher, Immanuel Kant, said that space is something that exists in our minds, and we each have the same unique "space" in our minds. And it turns out that, for Kant, this space had to be Euclidean.

 

To argue that we can come to know complex truths about non-material things, Kant used Euclid's proof that the sum of the angles of a triangle is two right angles. The proof uses geometrical constructions. Where do you make these constructions? Not on paper — geometry isn't about physical triangles or lines. You make them, Kant says, in the space within your mind.

It turns out that Euclid's proof requires the 5th postulate. So this theorem about the sum of the angles requires space to be Euclidean. Kant doesn't say this, but he does say there's only one space. So for Kant no alternative to Euclid seems conceivable.

 

Another philosopher witness to Euclidean space as truth was Voltaire. He shared the widespread 18th-century idea that universal agreement was a marker for truth. And he said, "There are no sects in geometry. One doesn't say 'I'm a Euclidean, I'm an Archimedean.' Demonstrate the truth and the whole world will be of your opinion." Mathematics exemplifies this, for Voltaire, and ethics should too! Voltaire wrote, "There is but one morality, as there is but one geometry." (And one of my students shot back, "Wrong on both points, Voltaire!" We will see why in the second part of this article.)

Euclid in art and architecture

 

The art and architecture of the early modern period also reflect the Euclidean idea of space. Here's the first important perspective painting of the Renaissance: the Trinity by Masaccio.

 

trinity.jpg

Trinity by Masaccio.

 

Now we are used to two-dimensional pictures that look three-dimensional because we have photography and television and iPhones and so on. In the Renaissance, they didn't have those things. So a painting like this was incredibly exciting to them. And this realistic illusion of depth in Renaissance art comes explicitly from Euclidean geometry.

 

It's useful to look at a medieval work of art to appreciate the difference between medieval and Renaissance art:

 

bayeux.jpg

A piece of the Bayeux Tapestry.

 

These people look as big as the castle! It's wonderful art, but there is no convincing three-dimensionality.

In the Renaissance, though, we're in a different kind of space:

 

ideal_city.jpg

 

The ideal city, attributed to Luciano Laurana or Melozzo da Forlì.

 

The geometry used in creating Renaissance art is literally Euclidean: results from Euclid's Elements of Geometry and from Euclid's Optics are absolutely essential to the theory of perspective used by artists, and they presuppose Euclid's 5th postulate. (To see how Euclid's geometry lets us "see" three-dimensional reality on a flat canvas, see Getting into the picture.)
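The core construction fits in a few lines of code. This minimal sketch of central projection (the viewer distance and the cube are my choices) uses similar triangles, the tool of Euclid's Optics: points farther from the picture plane are drawn proportionally closer to the center, producing the illusion of depth.

    import numpy as np

    def project(point, d=2.0):
        # Viewer at (0, 0, -d) looking at the picture plane z = 0.
        # Similar triangles give the scaling factor d / (d + z).
        x, y, z = point
        return np.array([x, y]) * d / (d + z)

    # The eight corners of a unit cube receding into the scene:
    # the far face (z = 2) shrinks more than the near face (z = 1).
    cube = [np.array([x, y, z], float)
            for x in (0, 1) for y in (0, 1) for z in (1, 2)]
    for corner in cube:
        print(corner, "->", project(corner).round(3))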

 

Architecture also teaches us to see our world as Euclidean. Right now you are probably in a room that is full of parallel lines, walls that are everywhere equidistant and make equal right angles with the floor, all with properties Euclid proved using his 5th postulate. This is the kind of room you would design if you wanted to brainwash people to believe that space is Euclidean.

 

So, that's the 18th-century world, the world of sufficient reason: symmetric, balanced, based on self-evident and necessary truths, embedded in Euclidean space. We can figure it all out rationally by ourselves.

 

Euclid's geometry is the universally agreed upon model of perfect intellectual authority. Or is it? In the second article we will blow all of this out of the water.

 

https://plus.maths.o...nce-rules-world



#16
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts

‘Outsiders’ Crack 50-Year-Old Math Problem

 

Three computer scientists have solved a problem central to a dozen far-flung mathematical fields.

 

In 2008, Daniel Spielman told his Yale University colleague Gil Kalai about a computer science problem he was working on, concerning how to “sparsify” a network so that it has fewer connections between nodes but still preserves the essential features of the original network.

 

Network sparsification has applications in data compression and efficient computation, but Spielman’s particular problem suggested something different to Kalai. It seemed connected to the famous Kadison-Singer problem, a question about the foundations of quantum physics that had remained unsolved for almost 50 years.

Over the decades, the Kadison-Singer problem had wormed its way into a dozen distant areas of mathematics and engineering, but no one seemed to be able to crack it. The question “defied the best efforts of some of the most talented mathematicians of the last 50 years,” wrote Peter Casazza and Janet Tremain of the University of Missouri in Columbia, in a 2014 survey article.

 

As a computer scientist, Spielman knew little of quantum mechanics or the Kadison-Singer problem’s allied mathematical field, called C*-algebras. But when Kalai, whose main institution is the Hebrew University of Jerusalem, described one of the problem’s many equivalent formulations, Spielman realized that he himself might be in the perfect position to solve it. “It seemed so natural, so central to the kinds of things I think about,” he said. “I thought, ‘I’ve got to be able to prove that.’” He guessed that the problem might take him a few weeks.

 

SrivastavaMarcusSpielman.jpeg

Courtesy of Adam Marcus

 

Three computer scientists (from left), Nikhil Srivastava, Adam Marcus, and Daniel Spielman, solved the famous Kadison-Singer problem in 2013.

 

Instead, it took him five years. In 2013, working with his postdoc Adam Marcus, now at Princeton University, and his graduate student Nikhil Srivastava, now at the University of California, Berkeley, Spielman finally succeeded. Word spread quickly through the mathematics community that one of the paramount problems in C*-algebras and a host of other fields had been solved by three outsiders — computer scientists who had barely a nodding acquaintance with the disciplines at the heart of the problem.

 

Mathematicians in these disciplines greeted the news with a combination of delight and hand-wringing. The solution, which Casazza and Tremain called “a major achievement of our time,” defied expectations about how the problem would be solved and seemed bafflingly foreign. Over the past two years, the experts in the Kadison-Singer problem have had to work hard to assimilate the ideas of the proof. Spielman, Marcus and Srivastava “brought a bunch of tools into this problem that none of us had ever heard of,” Casazza said. “A lot of us loved this problem and were dying to see it solved, and we had a lot of trouble understanding how they solved it.”

 

“The people who have the deep intuition about why these methods work are not the people who have been working on these problems for a long time,” said Terence Tao, of the University of California, Los Angeles, who has been following these developments. Mathematicians have held several workshops to unite these disparate camps, but the proof may take several more years to digest, Tao said. “We don’t have the manual for this magic tool yet.”

 

Computer scientists, however, have been quick to exploit the new techniques. Last year, for instance, two researchers parlayed these tools into a major leap forward in understanding the famously difficult traveling salesman problem. There are certain to be more such advances, said Assaf Naor, a mathematician at Princeton who works in areas related to the Kadison-Singer problem. “This is too profound to not have many more applications.”

 

A Common Problem

 

The question Richard Kadison and Isadore Singer posed in 1959 asks how much it is possible to learn about a “state” of a quantum system if you have complete information about that state in a special subsystem. Inspired by an informally worded comment by the legendary physicist Paul Dirac, their question builds on Werner Heisenberg’s uncertainty principle, which says that certain pairs of attributes, like the position and the momentum of a particle, cannot simultaneously be measured to arbitrary precision.

 

Kadison and Singer wondered about subsystems that contain as many different attributes (or “observables”) as can compatibly be measured at the same time. If you have complete knowledge of the state of such a subsystem, they asked, can you deduce the state of the entire system?

 

KadisonAndSinger.jpg

 

Left: Konrad Jacobs, Archives of the Mathematisches Forschungsinstitut Oberwolfach; Right: Courtesy of Isadore Singer

 

Richard Kadison (left), pictured at the 1970 International Congress of Mathematicians in Nice, France, and Isadore Singer posed a mathematics problem in 1959 that stood unsolved for more than 50 years.

 

In the case where the system you’re measuring is a particle that can move along a continuous line, Kadison and Singer showed that the answer is no: There can be many different quantum states that all look the same from the point of view of the observables you can simultaneously measure. “It is as if many different particles have exactly the same location simultaneously — in a sense, they are in parallel universes,” Kadison wrote by email, although he cautioned that it’s not yet clear whether such states can be realized physically.

 

Kadison and Singer’s result didn’t say what would happen if the space in which the particle lives is not a continuous line, but is instead some choppier version of the line — if space is “granular,” as Kadison put it. This is the question that came to be known as the Kadison-Singer problem.

 

Based on their work in the continuous setting, Kadison and Singer guessed that in this new setting the answer would again be that there are parallel universes. But they didn’t go so far as to state their guess as a conjecture — a wise move, in hindsight, since their gut instinct turned out to be wrong. “I’m happy I’ve been careful,” Kadison said.

 

Kadison and Singer — now at the University of Pennsylvania and the Massachusetts Institute of Technology (emeritus), respectively — posed their question at a moment when interest in the philosophical foundations of quantum mechanics was entering a renaissance. Although some physicists were promoting a “shut up and calculate” approach to the discipline, other, more mathematically inclined physicists pounced on the Kadison-Singer problem, which they understood as a question about C*-algebras, abstract structures that capture the algebraic properties not just of quantum systems but also of the random variables used in probability theory, the blocks of numbers called matrices, and regular numbers.

C*-algebras are an esoteric subject — “the most abstract nonsense that exists in mathematics,” in Casazza’s words. “Nobody outside the area knows much about it.” For the first two decades of the Kadison-Singer problem’s existence, it remained ensconced in this impenetrable realm.

 

Then in 1979, Joel Anderson, now an emeritus professor at Pennsylvania State University, popularized the problem by proving that it is equivalent to an easily stated question about when matrices can be broken down into simpler chunks. Matrices are the core objects in linear algebra, which is used to study mathematical phenomena whose behavior can be captured by lines, planes and higher-dimensional spaces. So suddenly, the Kadison-Singer problem was everywhere. Over the decades that followed, it emerged as the key problem in one field after another.

 

Because there tended to be scant interaction between these disparate fields, no one realized just how ubiquitous the Kadison-Singer problem had become until Casazza found that it was equivalent to the most important problem in his own area of signal processing. The problem concerned whether the processing of a signal can be broken down into smaller, simpler parts. Casazza dived into the Kadison-Singer problem, and in 2005, he, Tremain and two co-authors wrote a paper demonstrating that it was equivalent to the biggest unsolved problems in a dozen areas of math and engineering. A solution to any one of these problems, the authors showed, would solve them all.

 

One of the many equivalent formulations they wrote about had been devised just a few years earlier by Nik Weaver, of Washington University in St. Louis. Weaver’s version distilled the problem down to a natural-sounding question about when it is possible to divide a collection of vectors into two groups that each point in roughly the same set of directions as the original collection. “It’s a beautiful problem that brought out the core combinatorial problem” at the heart of the Kadison-Singer question, Weaver said.
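A brute-force toy version of Weaver’s question is easy to play with (the dimensions, vector count and whitening step are all my choices; the real conjecture concerns vectors of small norm and bounds independent of the dimension). Arrange a few vectors so their outer products sum to the identity, then search every two-way split for one in which each half looks like half of the whole:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)
    d, m = 3, 10
    v = rng.standard_normal((m, d))
    # Whiten the rows so their outer products sum to the identity matrix.
    c = np.linalg.cholesky(v.T @ v)
    v = v @ np.linalg.inv(c).T

    best = np.inf
    for signs in product([False, True], repeat=m):
        s = np.array(signs)
        half = sum(np.outer(x, x) for x in v[s]) - np.eye(d) / 2
        best = min(best, np.linalg.norm(half, 2))   # spectral-norm deviation
    print("best split's distance from half the identity:", round(best, 3))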

 

So Weaver was surprised when — apart from the mention in Casazza’s survey and one other paper that expressed skepticism about his approach — his formulation seemed to meet with radio silence. He thought no one had noticed his paper, but in fact it had attracted the attention of just the right people to solve it.

 

Electrical Properties

 

When Spielman learned about Weaver’s conjecture in 2008, he knew it was his kind of problem. There’s a natural way to switch between networks and collections of vectors, and Spielman had spent the preceding several years building up a powerful new approach to networks by viewing them as physical objects. If a network is thought of as an electrical circuit, for example, then the amount of current that runs through a given edge (instead of finding alternate routes) provides a natural way to measure that edge’s importance in the network.

 

Spielman discovered Weaver’s conjecture after Kalai introduced him to another form of the Kadison-Singer problem, and he realized that it was nearly identical to a simple question about networks: When is it possible to divide up the edges of a network into two categories — say, red edges and blue edges — so that the resulting red and blue networks have similar electrical properties to the whole network?

 

It’s not always possible to do this. For instance, if the original network consists of two highly connected clusters that are linked to each other by a single edge, then that edge has an outsize importance in the network. So if that critical edge is colored red, then the blue network can’t have similar electrical properties to the whole network. In fact, the blue network won’t even be connected.
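The electrical picture can be made concrete with a few lines of numpy (a standard effective-resistance computation, sketched by me; the graph below mirrors the two-cluster example). The effective resistance of an edge measures how much of the current between its endpoints is forced through that edge, and the bridge scores the maximum possible value of 1:

    import numpy as np

    def effective_resistance(n, edges):
        # Treat the graph as a network of unit resistors and return the
        # effective resistance across each edge via the graph Laplacian.
        L = np.zeros((n, n))
        for u, w in edges:
            L[u, u] += 1; L[w, w] += 1
            L[u, w] -= 1; L[w, u] -= 1
        Lp = np.linalg.pinv(L)                    # pseudoinverse of Laplacian
        return {(u, w): Lp[u, u] + Lp[w, w] - 2 * Lp[u, w] for u, w in edges}

    # Two triangles joined by a single bridge edge (2, 3).
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    for e, r in effective_resistance(6, edges).items():
        print(e, round(r, 3))
    # Triangle edges score about 0.667 (current can detour around them);
    # the bridge scores 1.0, flagging it as indispensable.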

 

Weaver’s problem asks whether this is the only type of obstacle to breaking down networks into similar but smaller ones. In other words, if there are enough ways to get around in a network — if no individual edge is too important — can the network be broken down into two subnetworks with similar electrical properties?

 

Spielman, Marcus and Srivastava suspected that the answer was yes, and their intuition did not just stem from their previous work on network sparsification. They also ran millions of simulations without finding any counterexamples. “A lot of our stuff was led by experimentation,” Marcus said. “Twenty years ago, the three of us sitting in the same room would not have solved this problem.”
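As a hedged illustration of this kind of experiment (the setup is mine, not the trio's actual code; it assumes numpy), the sketch below randomly colors the edges of a complete graph, where no single edge is too important, and checks that the red half has roughly half the electrical strength of the whole network, measured by Laplacian quadratic forms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]  # complete graph

def laplacian(edge_list):
    L = np.zeros((n, n))
    for u, v in edge_list:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

L_all = laplacian(edges)
L_red = laplacian([e for e in edges if rng.random() < 0.5])

# Compare the "electrical strength" x^T L x of the red half against the whole
# network along many random directions (ignoring the all-ones direction).
ratios = []
for _ in range(1000):
    x = rng.standard_normal(n)
    x -= x.mean()
    ratios.append((x @ L_red @ x) / (x @ L_all @ x))
print(round(min(ratios), 3), round(max(ratios), 3))  # both hover near 0.5
```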

 

The simulations convinced them that they were on the right track, even as the problem raised one stumbling block after another. And they kept making spurts of progress, enough to keep them hooked. When Marcus’ postdoctoral fellowship expired at the end of the team’s fourth year working on the problem, he elected to leave academia temporarily and join a local startup called Crisply rather than leave New Haven. “I worked for my company four days a week, and then once a week or so I would go to Yale,” he said.

 

A network’s electrical properties are governed by a special equation called the network’s “characteristic polynomial.” As the trio performed computer experiments on these polynomials, they found that the equations seemed to have hidden structure: Their solutions were always real numbers (as opposed to complex numbers), and, surprisingly, adding these polynomials together always seemed to result in a new polynomial with that same property. “These polynomials were doing more than we gave them credit for,” Marcus said. “We used them as a way of transferring knowledge, but really the polynomials seemed to be containing knowledge themselves.”
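A small numerical illustration of that hidden structure (my example, not taken from their papers; it assumes numpy): give each edge of a graph a random +1 or -1 sign. Each signed adjacency matrix is symmetric, so its characteristic polynomial has only real roots; the surprise is that the average of these polynomials over all signings is real-rooted as well (a classical fact, since that average is the graph's matching polynomial).

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
n = 4

avg = np.zeros(n + 1)
signings = list(itertools.product([1, -1], repeat=len(edges)))
for signs in signings:
    A = np.zeros((n, n))
    for s, (u, v) in zip(signs, edges):
        A[u, v] = A[v, u] = s
    avg += np.poly(A)          # coefficients of the characteristic polynomial
avg /= len(signings)

roots = np.roots(avg)
print(np.max(np.abs(roots.imag)))  # ~0: the averaged polynomial is real-rooted
```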

 

Piece by piece, the researchers developed a new technique for working with so-called “interlacing polynomials” to capture this underlying structure, and finally, on June 17, 2013, Marcus sent an email to Weaver, who had been his undergraduate advisor at Washington University 10 years earlier. “I hope you remember me,” Marcus wrote. “The reason I am writing is because we … think we have solved your conjecture (the one that you showed was equivalent to Kadison-Singer).” Within days, news of the team’s achievement had spread across the blogosphere.

 

The proof, which has since been thoroughly vetted, is highly original, Naor said. “What I love about it is just this feeling of freshness,” he said. “That’s why we want to solve open problems — for the rare events when somebody comes up with a solution that’s so different from what was before that it just completely changes our perspective.”

 

Computer scientists have already applied this new point of view to the “asymmetric” traveling salesman problem. In the traveling salesman problem, a salesman must travel through a series of cities, with the goal of minimizing the total distance traveled; the asymmetric version includes situations in which the distance from A to B differs from the distance from B to A (for instance, if the route includes one-way streets).
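For readers who haven't met the problem, here is a minimal brute-force sketch with a made-up asymmetric distance matrix (illustration only; the 1970 approximation algorithm discussed below is far more sophisticated than exhaustive search):

```python
from itertools import permutations

# dist[i][j] need not equal dist[j][i] -- think one-way streets.
dist = [[ 0,  2,  9, 10],
        [ 1,  0,  6,  4],
        [15,  7,  0,  8],
        [ 6,  3, 12,  0]]

def tour_length(tour):
    # Sum of leg distances, including the return to the starting city.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

tours = [(0,) + p for p in permutations(range(1, 4))]  # fix city 0 as start
best = min(tours, key=tour_length)
print(best, tour_length(best))  # the optimal tour and its length
```

Brute force checks all (n-1)! tours, which is hopeless beyond a handful of cities; that is why approximation algorithms like the 1970 one matter.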

 

The best-known algorithm for finding approximate solutions to the asymmetric problem dates back to 1970, but no one knew how good its approximations were. Now, using ideas from the proof of the Kadison-Singer problem, Nima Anari, of the University of California, Berkeley, and Shayan Oveis Gharan, of the University of Washington in Seattle, have shown that this algorithm performs exponentially better than people had realized. The new result is “major, major progress,” Naor said.

The proof of the Kadison-Singer problem implies that all the constructions in its dozen incarnations can, in principle, be carried out — quantum knowledge can be extended to full quantum systems, networks can be decomposed into electrically similar ones, matrices can be broken into simpler chunks. The proof won’t change what quantum physicists do, but it could have applications in signal processing, since it implies that collections of vectors used to digitize signals can be broken down into smaller frames that can be processed faster. The theorem “has potential to affect some important engineering problems,” Casazza said.

 

But there’s a big gulf between principle and practice. The proof establishes that these various constructions exist, but it doesn’t say how to carry them out. At present, Casazza says, “there isn’t a chance in hell” of pulling a useful algorithm out of the proof. However, now that mathematicians know that the question has a positive answer, he hopes that a constructive proof will be forthcoming — not to mention a proof that mathematicians in his field can actually understand. “All of us were completely convinced it had a negative answer, so none of us was actually trying to prove it,” he said.

 

Mathematicians in the fields in which the Kadison-Singer problem has been prominent may feel wistful that three outsiders came in and solved “their” central problem, but that’s not what really happened, Marcus said. “The only reason we could even try to solve such a problem is because people in that field had already removed all the hardness that was happening” in C*-algebras, he said. “There was just one piece left, and that piece wasn’t a problem they had the techniques to solve. I think the reason why this problem lasted 50 years is because it really had two parts that were hard.”

 

Throughout the five years he spent working on the Kadison-Singer problem, Marcus said, “I don’t think I could have told you what the problem was in the C*-algebra language, because I had no clue.” The fact that he, Srivastava and Spielman were able to solve it “says something about what I hope will be the future of mathematics,” he said. When mathematicians import ideas across fields, “that’s when I think these really interesting jumps in knowledge happen.”

 

This article was reprinted on Wired.com. A version of this article also appeared on WNYC’s Hypothesis.

 

https://www.quantama...r-math-problem/



#17
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts

Mathematicians Chase Moonshine’s Shadow

 

Researchers are on the trail of a mysterious connection between number theory, algebra and string theory.

 

In 1978, the mathematician John McKay noticed what seemed like an odd coincidence. He had been studying the different ways of representing the structure of a mysterious entity called the monster group, a gargantuan algebraic object that, mathematicians believed, captured a new kind of symmetry. Mathematicians weren’t sure that the monster group actually existed, but they knew that if it did exist, it acted in special ways in particular dimensions, the first two of which were 1 and 196,883.

 

McKay, of Concordia University in Montreal, happened to pick up a mathematics paper in a completely different field, involving something called the j-function, one of the most fundamental objects in number theory. Strangely enough, this function’s first important coefficient is 196,884, which McKay instantly recognized as the sum of the monster’s first two special dimensions.

 

Most mathematicians dismissed the finding as a fluke, since there was no reason to expect the monster and the j-function to be even remotely related. However, the connection caught the attention of John Thompson, a Fields medalist now at the University of Florida in Gainesville, who made an additional discovery. The j-function’s second coefficient, 21,493,760, is the sum of the first three special dimensions of the monster: 1 + 196,883 + 21,296,876. It seemed as if the j-function was somehow controlling the structure of the elusive monster group.
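The coincidences are easy to verify; every number in this snippet is quoted in the article itself:

```python
# First "special dimensions" of the monster group, as given above.
monster_dims = [1, 196883, 21296876]

print(sum(monster_dims[:2]))  # 196884:   the j-function's first important coefficient
print(sum(monster_dims[:3]))  # 21493760: the j-function's second coefficient
```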

 

[Image: Miranda Cheng of the University of Amsterdam and France's National Center for Scientific Research. Credit: Tanja Kernweiss]

 

Soon, two other mathematicians had demonstrated so many of these numerical relationships that it no longer seemed possible that they were mere coincidences. In a 1979 paper called “Monstrous Moonshine,” the pair — John Conway, now of Princeton University, and Simon Norton — conjectured that these relationships must result from some deep connection between the monster group and the j-function. “They called it moonshine because it appeared so far-fetched,” said Don Zagier, a director of the Max Planck Institute for Mathematics in Bonn, Germany. “They were such wild ideas that it seemed like wishful thinking to imagine anyone could ever prove them.”

 

It took several more years before mathematicians succeeded in even constructing the monster group, but they had a good excuse: The monster has more than 10^53 elements, which is more than the number of atoms in a thousand Earths. In 1992, a decade after Robert Griess of the University of Michigan constructed the monster, Richard Borcherds tamed the wild ideas of monstrous moonshine, eventually earning a Fields Medal for this work. Borcherds, of the University of California, Berkeley, proved that there was a bridge between the two distant realms of mathematics in which the monster and the j-function live: namely, string theory, the counterintuitive idea that the universe has tiny hidden dimensions, too small to measure, in which strings vibrate to produce the physical effects we experience at the macroscopic scale.

 

Borcherds’ discovery touched off a revolution in pure mathematics, leading to a new field known as generalized Kac-Moody algebras. But from a string theory point of view, it was something of a backwater. The 24-dimensional string theory model that linked the j-function and the monster was far removed from the models string theorists were most excited about. “It seemed like just an esoteric corner of the theory, without much physical interest, although the math results were startling,” said Shamit Kachru, a string theorist at Stanford University.

 

[Image: John Duncan of Case Western Reserve University. Credit: Courtesy of John Duncan]

 

But now moonshine is undergoing a renaissance, one that may eventually have deep implications for string theory. Over the past five years, starting with a discovery analogous to McKay’s, mathematicians and physicists have come to realize that monstrous moonshine is just the start of the story.

 

Last week, researchers posted a paper on arxiv.org presenting a numerical proof of the so-called Umbral Moonshine Conjecture, formulated in 2012, which proposes that in addition to monstrous moonshine, there are 23 other moonshines: mysterious correspondences between the dimensions of a symmetry group on the one hand, and the coefficients of a special function on the other. The functions in these new moonshines have their origins in a prescient letter by one of mathematics’ great geniuses, written more than half a century before moonshine was even a glimmer in the minds of mathematicians.

 

The 23 new moonshines appear to be intertwined with some of the most central structures in string theory, four-dimensional objects known as K3 surfaces. The connection with umbral moonshine hints at hidden symmetries in these surfaces, said Miranda Cheng of the University of Amsterdam and France’s National Center for Scientific Research, who originated the Umbral Moonshine Conjecture together with John Duncan, of Case Western Reserve University in Cleveland, Ohio, and Jeffrey Harvey, of the University of Chicago. “This is important, and we need to understand it,” she said.

The new proof strongly suggests that in each of the 23 cases, there must be a string theory model that holds the key to understanding these otherwise baffling numerical correspondences. But the proof doesn’t go so far as to actually construct the relevant string theory models, leaving physicists with a tantalizing problem. “At the end of the day when we understand what moonshine is, it will be in terms of physics,” Duncan said.

 

[Image: Rotating a square 90 degrees and then reflecting it horizontally has the same effect as reflecting it across a diagonal, so in the language of square symmetry arithmetic, 90-degree rotation + horizontal reflection = diagonal reflection. Credit: Olena Shmahalo/Quanta Magazine]

 

Monstrous Moonshine

 

The symmetries of any given shape have a natural sort of arithmetic to them. For example, rotating a square 90 degrees and then flipping it horizontally is the same as flipping it across a diagonal — in other words, “90-degree rotation + horizontal flip = diagonal flip.” During the 19th century, mathematicians realized that they could distill this type of arithmetic into an algebraic entity called a group. The same abstract group can represent the symmetries of many different shapes, giving mathematicians a tidy way to understand the commonalities in different shapes.
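This symmetry arithmetic can be checked directly with 2x2 matrices. In the sketch below the conventions are mine (a counterclockwise rotation, with the horizontal flip taken across the vertical axis; other conventions land on the square's other diagonal):

```python
import numpy as np

rot90 = np.array([[0, -1],
                  [1,  0]])   # rotate 90 degrees counterclockwise
hflip = np.array([[-1, 0],
                  [ 0, 1]])   # horizontal flip (across the vertical axis)
dflip = np.array([[0, 1],
                  [1, 0]])    # reflection across the diagonal y = x

# Matrices compose right to left: rotate first, then flip.
print(np.array_equal(hflip @ rot90, dflip))  # True
```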

 

Over much of the 20th century, mathematicians worked to classify all the finite simple groups, the basic building blocks of symmetry, and they gradually discovered something strange: While most fell into natural families, there were 26 oddballs that defied categorization. Of these, the biggest, and the last to be discovered, was the monster.

 

[Image: Modular functions have repeating patterns related to this tiling. Credit: Wikimedia Commons / Tom Ruen]

 

Before McKay’s serendipitous discovery nearly four decades ago, there was no reason to think the monster group had anything to do with the j-function, the second protagonist of the monstrous-moonshine story. The j-function belongs to a special class of functions whose graphs have repeating patterns similar to M. C. Escher’s tessellation of a disk with angels and devils, which shrink ever smaller as they approach the outer boundary. These “modular” functions are the heroes of number theory, playing a crucial role, for instance, in Andrew Wiles’ 1994 proof of Fermat’s Last Theorem. “Any time you hear about a striking result in number theory, there’s a high chance that it’s really a statement about modular forms,” Kachru said.

 

As with a sound wave, the j-function’s repeating pattern can be broken down into a collection of pure tones, so to speak, with coefficients indicating how “loud” each tone is. It is in these coefficients that McKay found the link to the monster group.

In the early 1990s, building on work by Igor Frenkel of Yale University, James Lepowsky of Rutgers University and Arne Meurman of Lund University in Sweden, Borcherds made sense of McKay’s discovery by showing that there is a particular string theory model in which the j-function and the monster group both play roles. The coefficients of the j-function count the ways strings can oscillate at each energy level. And the monster group captures the model’s symmetry at those energy levels.

 

The finding gave mathematicians a way to study the mind-bogglingly large monster group using the j-function, whose coefficients are easy to calculate. “Math is all about building bridges where on one side you see more clearly than on the other,” Duncan said. “But this bridge was so unexpectedly powerful that before you see the proof it’s kind of crazy.”

 

New Moonshine

 

While mathematicians explored the ramifications of monstrous moonshine, string theorists focused on a seemingly different problem: figuring out the geometry for the tiny dimensions in which strings are hypothesized to live. Different geometries allow strings to vibrate in different ways, just as tightening the tension on a drum changes its pitch. For decades, physicists have struggled to find a geometry that produces the physical effects we see in the real world.

 

An important ingredient in some of the most promising candidates for such a geometry is a collection of four-dimensional shapes known as K3 surfaces. In contrast with Borcherds’ string theory model, Kachru said, K3 surfaces fill the string theory textbooks.

 

Not enough is known about the geometry of K3 surfaces to count how many ways strings can oscillate at each energy level, but physicists can write down a more limited function counting certain physical states that appear in all K3 surfaces. In 2010, three string theorists — Tohru Eguchi of Kyoto University in Japan, Hirosi Ooguri of the California Institute of Technology in Pasadena, and Yuji Tachikawa of the University of Tokyo in Japan — noticed that if they wrote this function in a particular way, out popped coefficients that were the same as some special dimensions of another oddball group, called the Mathieu 24 (M24) group, which has nearly 250 million elements. The three physicists had discovered a new moonshine.

This time, physicists and mathematicians were all over the discovery. “I was at several conferences, and all the talk was about this new Mathieu moonshine,” Zagier said.

 

[Image: Jeffrey Harvey of the University of Chicago. Credit: University of Chicago]

 

Zagier attended one such conference in Zurich in July 2011, and there, Duncan wrote in an email, Zagier showed him “a piece of paper with lots of numbers on it” — the coefficients of some functions Zagier was studying called “mock modular” forms, which are related to modular functions. “Don [Zagier] pointed to a particular line of numbers and asked me — in jest, I think — if there is any finite group related to them,” Duncan wrote.

 

Duncan wasn’t sure, but he recognized the numbers on another line: They belonged to the special dimensions of a group called M12. Duncan buttonholed Miranda Cheng, and the two pored over the rest of Zagier’s piece of paper. The pair, together with Jeffrey Harvey, gradually realized that there was much more to the new moonshine than just the M24 example. The clue to the full moonshine picture, they found, lay in the nearly century-old writings of one of mathematics’ legendary figures.

 

Moonshine’s Shadows

 

In 1913, the English mathematician G. H. Hardy received a letter from an accounting clerk in Madras, India, describing some mathematical formulas he had discovered. Many of them were old hat, and some were flat-out wrong, but on the final page were three formulas that blew Hardy’s mind. “They must be true,” wrote Hardy, who promptly invited the clerk, Srinivasa Ramanujan, to England, “because, if they were not true, no one would have the imagination to invent them.”

 

[Image: Srinivasa Ramanujan's final letter to G. H. Hardy in 1920, explaining his discovery of what he called "mock theta" functions. Credit: Courtesy of Ken Ono]

 

Ramanujan became famous for seemingly pulling mathematical relationships out of thin air, and he credited many of his discoveries to the goddess Namagiri, who appeared to him in visions, he said. His mathematical career was tragically brief, and in 1920, as he lay dying in India at age 32, he wrote Hardy another letter saying that he had discovered what he called “mock theta” functions, which entered into mathematics “beautifully.” Ramanujan listed 17 examples of these functions, but didn’t explain what they had in common. The question remained open for more than eight decades, until Sander Zwegers, then a graduate student of Zagier’s and now a professor at the University of Cologne in Germany, figured out in 2002 that they are all examples of what came to be known as mock modular forms.

 

After the Zurich moonshine conference, Cheng, Duncan and Harvey gradually figured out that M24 moonshine is one of 23 different moonshines, each making a connection between the special dimensions of a group and the coefficients of a mock modular form — just as monstrous moonshine made a connection between the monster group and the j-function. For each of these moonshines, the researchers conjectured, there is a string theory like the one in monstrous moonshine, in which the mock modular form counts the string states and the group captures the model’s symmetry. A mock modular form always has an associated modular function called its “shadow,” so they named their hypothesis the Umbral Moonshine Conjecture — umbra is Latin for “shadow.” Many of the mock modular forms that appear in the conjecture are among the 17 special examples Ramanujan listed in his prophetic letter.

 

Curiously enough, Borcherds’ earlier proof of monstrous moonshine also builds on work by Ramanujan: The algebraic objects at the core of the proof were discovered by Frenkel, Lepowsky and Meurman as they analyzed the three formulas that had so startled Hardy in Ramanujan’s first letter. “It’s amazing that these two letters form the cornerstone of what we know about moonshine,” said Ken Ono, of Emory University in Atlanta, Ga. “Without either letter, we couldn’t write this story.”

 

Finding the Beast

 

In the new paper posted on arxiv.org, Duncan, Ono and Ono’s graduate student Michael Griffin have come up with a numerical proof of the Umbral Moonshine Conjecture (one case of which — the M24 case — had already been proven by Terry Gannon, of the University of Alberta in Edmonton, Canada). The new analysis provides only hints of where physicists should look for the string theories that will unite the groups and the mock modular forms. Nevertheless, the proof confirms that the conjecture is on the right track, Harvey said. “We had all this structure, and it was so intricate and compelling that it was hard not to think there was some truth to it,” he said. “Having a mathematical proof makes it a solid piece of work that people can think seriously about.”

 

[Image: Ken Ono of Emory University. Credit: Emory University]

 

The string theory underlying umbral moonshine is likely to be “not just any physical theory, but a particularly important one,” Cheng said. “It suggests that there’s a special symmetry acting on the physical theory of K3 surfaces.” Researchers studying K3 surfaces can’t see this symmetry yet, she said, suggesting that “there is probably a better way of looking at that theory that we haven’t found yet.”

 

Physicists are also excited about a highly conjectural connection between moonshine and quantum gravity, the as-yet-undiscovered theory that will unite general relativity and quantum mechanics. In 2007, the physicist Edward Witten, of the Institute for Advanced Study in Princeton, N.J., speculated that the string theory in monstrous moonshine should offer a way to construct a model of three-dimensional quantum gravity, in which 194 natural categories of elements in the monster group correspond to 194 classes of black holes. Umbral moonshine may lead physicists to similar conjectures, giving hints of where to look for a quantum gravity theory. “That is a big hope for the field,” Duncan said.

 

The new numerical proof of the Umbral Moonshine Conjecture is “like looking for an animal on Mars and seeing its footprint, so we know it’s there,” Zagier said. Now, researchers have to find the animal — the string theory that would illuminate all these deep connections. “We really want to get our hands on it,” Zagier said.

 

This article was reprinted on ScientificAmerican.com.

 

https://www.quantama...nshines-shadow/



#18
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts
Poor Johannes Kepler. One of the greatest astronomers ever, the man who figured out the laws of planetary motion, a genius, scholar and mathematician — in 1611, he needed a wife. The previous Mrs. Kepler had died of Hungarian spotted fever, so, with kids to raise and a household to manage, he decided to line up some candidates — but it wasn't going very well.
 
Being an orderly man, he decided to interview 11 women. As Alex Bellos describes it in his new book The Grapes of Math, Kepler kept notes as he wooed. It's a catalog of small disappointments. The first candidate, he wrote, had "stinking breath."
 
The second "had been brought up in luxury that was above her station" — she had expensive tastes. Not promising.
 
The third was engaged to a man — definitely a problem. Plus, that man had sired a child with a prostitute. So ... complicated.
 
The fourth woman was nice to look at — of "tall stature and athletic build" ...
 
... but Kepler wanted to check out the next one (the fifth), who, he'd been told, was "modest, thrifty, diligent and [said] to love her stepchildren," so he hesitated. He hesitated so long that both No. 4 and No. 5 got impatient and took themselves out of the running (bummer), leaving him with No. 6, who scared him. She was a grand lady, and he "feared the expense of a sumptuous wedding ... "
 
The seventh was very fetching. He liked her. But he hadn't yet completed his list, so he kept her waiting, and she wasn't the waiting type. She rejected him.
 
The eighth he didn't much care for, though he thought her mother "was a mostly worthy person ... "
 
The ninth was sickly, the 10th had a shape not suitable "even for a man of simple tastes," and the last one, the 11th, was too young. What to do? Having run through all his candidates, totally wooed-out, he decided that maybe he'd done this all wrong.
 
"Was it Divine Providence or my own moral guilt," he wrote, "which, for two years or longer, tore me in so many different directions and made me consider the possibility of such different unions?"
 
Game On
 
What Kepler needed, Alex Bellos writes, was an optimal strategy — a way, not to guarantee success, but to maximize the likelihood of satisfaction. And, as it turns out, mathematicians think they have such a formula.
 
It works any time you have a list of potential wives, husbands, prom dates, job applicants or garage mechanics. The rules are simple: You start with a situation where you have a fixed number of options (if, say, you live in a small town and there aren't unlimited men to date or garages to go to), so you make a list — that's your final list — and you interview each candidate one by one. Again, what I'm about to describe doesn't always produce a happy result, but it does so more often than would occur randomly. For mathematicians, that's enough.
 
They even have a name for it. In the 1960s it was called (a la Kepler) "The Marriage Problem." Later, it was dubbed The Secretary Problem.
 
How To Do It
 
Alex writes: "Imagine that you are interviewing 20 people to be your secretary [or your spouse or your garage mechanic] with the rule that you must decide at the end of each interview whether or not to give that applicant the job." If you offer the job to somebody, game's up. You can't go on and meet the others. "If you haven't chosen anyone by the time you see the last candidate, you must offer the job to her," Alex writes (not assuming that all secretaries are female — he's just adapting the attitudes of the early '60s).
 
So remember: At the end of each interview, you either make an offer or you move on.
 
If you don't make an offer, no going back. Once you make an offer, the game stops.
 
According to Martin Gardner, who in 1960 described the formula (partly worked out earlier by others), the best way to proceed is to interview (or date) the first 36.8 percent of the candidates. Don't hire (or marry) any of them, but as soon as you meet a candidate who's better than the best of that first group — that's the one you choose! Yes, the Very Best Candidate might show up in that first 36.8 percent — in which case you'll be stuck with second best, but still, if you like favorable odds, this is the best way to go.
 
Why 36.8 percent? The answer involves the number mathematicians call "e," whose reciprocal is 1/e ≈ 0.368, or 36.8 percent. For the specific details, check Alex's book, but this formula has proved itself over and over in all kinds of controlled situations. While it doesn't guarantee happiness or satisfaction, it does give you a 36.8 percent chance of choosing the very best candidate, which, in a field of 11 possible wives, is a pretty good success rate.
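A quick Monte Carlo sketch makes the rule concrete (my code, under the standard assumptions that candidates arrive in uniformly random order and can be strictly ranked):

```python
import math
import random

def secretary_trial(n):
    ranks = list(range(n))           # 0 is the best candidate
    random.shuffle(ranks)
    k = round(n / math.e)            # observe the first ~36.8% without committing
    benchmark = min(ranks[:k], default=n)
    for r in ranks[k:]:
        if r < benchmark:            # first candidate better than the benchmark
            return r == 0            # success means we got the very best
    return ranks[-1] == 0            # otherwise we're stuck with the last one

n, trials = 11, 100_000              # Kepler's field of 11 candidates
wins = sum(secretary_trial(n) for _ in range(trials))
print(wins / trials)  # about 0.40 for n = 11; tends to 1/e ~ 0.368 for large n
```

For 11 candidates the cutoff works out to the first four, which matches the walkthrough below.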
 
Try It, Johannes ...
 
What would have happened if Johannes Kepler had used this formula? Well, he would have interviewed but made no offers to the first 36.8 percent of his sample, which in a group of 11 ladies means he'd skip past the first four candidates. But the moment he'd met somebody (starting with lady No. 5) that he liked better than anyone in the first group, he'd have said, "Will you marry me?"
 
In real life, after a period of reflection, Johannes Kepler re-wooed and then married the fifth woman.
 
The way Alex figures it, if Kepler had known about this formula (which today is an example of what mathematicians call optimal stopping), he could have skipped the last batch of ladies — the sickly one, the unshapely one, the too-young one, the lung-disease one — and, all in all, "Kepler would have saved himself six bad dates."
 
Instead, he just followed his heart (which, of course, is another tolerable option, even for great mathematicians). His marriage to No. 5, by the way, turned out to be a very happy one.
 

  • Infinite likes this

#19
Unity

Unity

    Information Organism

  • Members
  • PipPipPipPipPipPipPipPip
  • 2,442 posts
Xinwen Zhu discusses the unifying theory of mathematics
 
December 8, 2014 by Jessica Stoller-Conrad
 
[Image: Xinwen Zhu, associate professor of mathematics. Credit: Lance Hayashida/Caltech Marketing and Communications]
 
In 1994, British mathematician Andrew Wiles successfully developed a proof of Fermat's last theorem, the claim that 17th-century mathematician Pierre de Fermat famously scribbled in a book margin and that subsequently eluded even the best minds for more than 300 years. Wiles's hard-won success came after digging into a vast web of mathematical conjectures called the Langlands program. The Langlands program, proposed by Canadian mathematician Robert Phelan Langlands in the 1960s, acts as a bridge between seemingly unrelated disciplines in mathematics, such as number theory—the study of prime numbers and other integers—and more visual disciplines such as geometry.
 
However, to get the ideas he needed for his history-making proof, Wiles only scratched the surface of the Langlands program. Now Xinwen Zhu, an associate professor of mathematics at Caltech, is digging deeper, looking for further applications of this so-called unifying theory of mathematics—and how it can be used to relate number theory to disciplines ranging from quantum physics to the study of donut-shaped geometric surfaces.
Zhu came to Caltech from Northwestern University in September. Originally from Sichuan, China, he received his bachelor's degree from Peking University in 2004 and his doctorate from UC Berkeley in 2009.
He recently spoke with us about his work, the average day of a mathematician, and his new life in California.
Can you give us a general description of your research?
 
My work is in mathematics, related to what's called Langlands program. It's one of the most intrinsic parts of modern mathematics. It relates number theory—specifically the study of prime numbers like 2, 3, 5, 7, and so on—to topics as diverse as geometry and quantum physics.
 
Why do you want to connect number theory to geometry and quantum physics?
 
Compared to number theory, geometry is more intuitive. You can see a shape and understand the mathematics that are involved in making that shape. Number theory is just numbers—in this case, just prime numbers. But if we combine the two, then instead of thinking about the primes as numbers, we can visualize them as points on a Riemann surface—a geometric surface kind of represented by the shape of a donut—and the points can move continuously. Think of an ant on a donut—the ant can move freely on the surface. This means that a point on the donut has some intrinsic connections with the points nearby. In number theory it is very difficult to say that any relationship exists between two primes, say 5 and 7, because there are no other primes between them; on a surface, by contrast, there are always more points between any two points. It is still very difficult to envision, but it gives us a more intuitive way to think about the numbers.
 
We want to understand certain things about prime numbers—for example, the distribution of primes among all natural numbers. But that's difficult when you're working with just the numbers; there are very few rules, and everything is unpredictable. The geometric theory here adds a sort of geometric intuition, and the application to quantum field theory adds a physical intuition. Thinking about the numbers and equations in these contexts can give us new insights. I really don't understand exactly how physicists think, but physicists are very smart because they have this intuition. It's just sort of their nature. They can always make the right guess or conjecture. So our hope is to use this sort of intuition to come back to understand what happens in number theory.
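As a toy example of one such distribution question (my illustration, not Zhu's): the prime number theorem says the number of primes up to x is approximately x / ln(x), which a short sieve can check:

```python
import math

def count_primes(x):
    # Sieve of Eratosthenes.
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, x + 1, i):
                sieve[j] = False
    return sum(sieve)

for x in (10**3, 10**4, 10**5):
    print(x, count_primes(x), round(x / math.log(x)))
# 1000 -> 168 vs 145; 10^5 -> 9592 vs 8686: the ratio drifts toward 1 as x grows.
```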
 
Mathematicians don't really have lab spaces or equipment for experiments, so what does a day at the office look like for you?
 
Usually I just think. And unfortunately, it's usually without any result, but that's fine. Then, after months and months, one day there is an idea. And that's how we do math. We read papers sometimes to keep our eyes on what the newest development is, but it's probably not as important as it is for other disciplines. Of course, one can also get new ideas and stimulation from reading, so we keep our eyes on what's going on this week.
 
A two-part question: How did you first get interested in math in general, and how did you get interested in this particular field that you're in now?
 
My interest in math began when I was a child. People can usually count numbers at a pretty early age, but I was interested in math and could do calculations a little bit quicker and a bit younger than others. It came naturally to me. Also, my grandfather was a chemist and physicist, and he always emphasized the importance of math.
But to be honest, I didn't really know anything about this aspect of the Langlands program until I was in graduate school at Berkeley. My adviser, Edward Frenkel, brought me into this area.
 
What are you most excited about in terms of your move to Caltech?
 
I think this is, of course, a fantastic place. The undergraduates here are very strong, and the graduate school is also very good, so I'm also very excited to work with all of those young people. Also, the physics department here is very good, and as I said, quantum field theory has recently provided promising new ways to think about these old problems from number theory. Caltech professors Anton Kapustin and Sergei Gukov have played central roles in revealing these connections between physics and the Langlands program.
 
Is there anything else that you're looking forward to about living in Pasadena?
 
I'm from Sichuan [province in China], and one thing that I miss is the food. It's hot and spicy, and now it's also kind of popular in the U.S. And there are very good Szechwan restaurants in the San Gabriel Valley. Actually, maybe the best Szechwan food in the U.S. is right here.
 
Aside from your research and professional interests, do you have any other hobbies?
 
Yes, I've been playing the game Go for more than 20 years. It's a board game that is kind of like chess. It's interesting, and it's very complicated. Many years ago, you'd play with a game set and one opponent, but now you can also play it online. And that's good for me because after moving from place to place, it's hard to consistently find someone to play with.
 
 
 
 

  • Outlook likes this

#20
Outlook

Outlook

    Muslim

  • Members
  • PipPipPipPipPipPip
  • 676 posts
  • LocationWherever Alien Observer by Grouper could be played.

The thread you put together here is such a great thread. Keep up the good work.


  • Unity and Infinite like this
Everything is predefined.



