Math News and Discussion
Posted 03 July 2016 - 07:38 AM
Nature inspires applied math project that seeks to improve traffic flow
June 13, 2016 by Adam Zewe
Tiny and industrious, ants are models of teamwork and efficiency. The picnic-wrecking insects could also teach city planners a thing or two about how to optimize the timing of traffic signals, according to students at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS).
For their final project in "Advanced Optimization" (AM 221), taught by Yaron Singer, assistant professor of computer science, applied math concentrators Robert Chen, A.B. '17, and Alex Wang, A.B. '17, used a heuristic technique called "ant colony optimization" (ACO) to find the most efficient time settings for traffic signals on a grid of city streets.
"Efficiently controlling traffic light cycles could have major benefits for the public, since people spend more and more time sitting in traffic," Wang said. "Optimization could reduce wait times at intersections across an entire city, saving time and money for a lot of people."
Traffic systems have often been the target of different modeling techniques, but the inherent complexity of traffic flow, which becomes even more complicated when additional intersections and light cycles are added, makes it difficult to optimize mathematically, Chen explained.
So Wang and Chen took a page from Mother Nature's book. In ACO, simulated "ants" traverse a graph that maps all possible solutions to the problem—in this case, all possible light cycle combinations on a square grid of city streets.
"These simulated ants deposit pheromone chemicals on the paths that they take. They will deposit more pheromones on solutions that are better—more efficient—and this encourages more ants to follow that path and converge on the optimal solution," Wang explained.
The ants traverse all the edges and vertices of the graph, but they travel over the edges with the highest pheromone values more often. In this case, these well-traveled edges represented the optimal timings for each traffic light, and were more likely to be used in future iterations as the simulation progressed.
The large number of possible traffic light settings initially made this approach computationally infeasible. Chen and Wang developed a novel solution: they adjusted the graph so that pairs of vertices shared edges, reducing the total number of possibilities that needed to be explored in each round of simulation. They further refined the method by ranking the ants relative to one another, rather than evaluating each ant on an absolute scale, as is typical in ACO. In each round of the simulation, how much pheromone an ant deposited compared to the other ants determined the solutions favored in the next round.
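A rank-based ACO loop of this flavor can be sketched in a few dozen lines. Everything below is illustrative rather than the students' actual code: the candidate green-light durations, the toy cost function standing in for simulated travel time, and all parameter values are invented for the example.

```python
import random

random.seed(0)

# Hypothetical setup: choose one green-light duration (seconds) for each
# of four intersections. The "cost" is a stand-in for simulated mean
# travel time; here we pretend intersection i is best served by
# DURATIONS[i % 3], just so the example has a known optimum.
DURATIONS = [20, 30, 40]
N_INTERSECTIONS = 4
N_ANTS = 20
N_ROUNDS = 50
EVAPORATION = 0.5

def cost(solution):
    return sum(abs(d - DURATIONS[i % len(DURATIONS)])
               for i, d in enumerate(solution))

# pheromone[i][d] = attractiveness of duration d at intersection i
pheromone = [{d: 1.0 for d in DURATIONS} for _ in range(N_INTERSECTIONS)]

def construct():
    """One ant builds a solution, biased by pheromone levels."""
    return [random.choices(DURATIONS,
                           weights=[pheromone[i][d] for d in DURATIONS])[0]
            for i in range(N_INTERSECTIONS)]

best = None
for _ in range(N_ROUNDS):
    ants = sorted((construct() for _ in range(N_ANTS)), key=cost)
    if best is None or cost(ants[0]) < cost(best):
        best = ants[0]
    # Evaporate old pheromone everywhere.
    for row in pheromone:
        for d in DURATIONS:
            row[d] *= EVAPORATION
    # Rank-based deposit, echoing the relative-ranking tweak: only the
    # top ants deposit, in proportion to their rank, not their raw cost.
    for rank, ant in enumerate(ants[:5]):
        for i, d in enumerate(ant):
            pheromone[i][d] += (5 - rank) / 5.0

print(best, cost(best))
```

Over the rounds, pheromone accumulates on durations that keep appearing in highly ranked solutions, so later ants concentrate on them, which is the "converge on the optimal solution" behavior described above.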
"The adjustments we've made to ACO are actually reminiscent of several machine learning techniques, so it would be interesting to apply this ACO method to other problems that are generally tackled with machine learning and see if there are any differences," Chen said.
In the end, their modified ACO system led to a 7 percent decrease in mean travel time over standard baselines used in some cities. In practice, rather than simply timing each green light for an arbitrary 30 seconds, the ACO model might lead city planners to time a green light at one intersection for 20 seconds, while another would stay green for 40 seconds. With this theoretical model, the next step would be to apply the techniques to real-world city data to see how it works with a dynamic traffic system, said Wang.
For Wang and Chen, the biggest lesson they learned from the project—aside from the power of mathematics to solve real-world problems—is that experimentation doesn't always lead to expected results. At the start of their research, they thought the ACO method would be too complex to effectively solve their problem.
"So often in research projects, the intuitive methods are the ones that perform better, but you never truly know what you are going to find," said Chen. "Whatever avenues you start exploring, it is critical to put your assumptions aside and consider your next steps based on what you see from the results."
Read more at: http://phys.org/news...raffic.html#jCp
Posted 10 July 2016 - 07:46 PM
Mathematical framework offers a more detailed understanding of network relationships
Posted 10 July 2016 - 08:23 PM
Mathematical model gives insights into cotranscriptional folding of RNA
Posted 16 August 2016 - 02:30 AM
Researchers have uncovered deep connections among different types of random objects, illuminating hidden geometric structures.
Randomness increases in a structure known as an “SLE curve.”
August 2, 2016
Standard geometric objects can be described by simple rules — every straight line, for example, is just y = ax + b — and they stand in neat relation to each other: Connect two points to make a line, connect four line segments to make a square, connect six squares to make a cube.
These are not the kinds of objects that concern Scott Sheffield. Sheffield, a professor of mathematics at the Massachusetts Institute of Technology, studies shapes that are constructed by random processes. No two of them are ever exactly alike. Consider the most familiar random shape, the random walk, which shows up everywhere from the movement of financial asset prices to the path of particles in quantum physics. These walks are described as random because no knowledge of the path up to a given point can allow you to predict where it will go next.
Beyond the one-dimensional random walk, there are many other kinds of random shapes. There are varieties of random paths, random two-dimensional surfaces, random growth models that approximate, for example, the way a lichen spreads on a rock. All of these shapes emerge naturally in the physical world, yet until recently they’ve existed beyond the boundaries of rigorous mathematical thought. Given a large collection of random paths or random two-dimensional shapes, mathematicians would have been at a loss to say much about what these random objects shared in common.
Yet in work over the past few years, Sheffield and his frequent collaborator, Jason Miller, a professor at the University of Cambridge, have shown that these random shapes can be categorized into various classes, that these classes have distinct properties of their own, and that some kinds of random objects have surprisingly clear connections with other kinds of random objects. Their work forms the beginning of a unified theory of geometric randomness.
“You take the most natural objects — trees, paths, surfaces — and you show they’re all related to each other,” Sheffield said. “And once you have these relationships, you can prove all sorts of new theorems you couldn’t prove before.”
In the coming months, Sheffield and Miller will publish the final part of a three-paper series that for the first time provides a comprehensive view of random two-dimensional surfaces — an achievement not unlike the Euclidean mapping of the plane.
“Scott and Jason have been able to implement natural ideas and not be rolled over by technical details,” said Wendelin Werner, a professor at ETH Zurich and winner of the Fields Medal in 2006 for his work in probability theory and statistical physics. “They have been basically able to push for results that looked out of reach using other approaches.”
A Random Walk on a Quantum String
In standard Euclidean geometry, objects of interest include lines, rays, and smooth curves like circles and parabolas. The coordinate values of the points in these shapes follow clear, ordered patterns that can be described by functions. If you know the value of two points on a line, for instance, you know the values of all other points on the line. The same is true for the values of the points on each of the rays in this first image, which begin at a point and radiate outward.
One way to begin to picture what random two-dimensional geometries look like is to think about airplanes. When an airplane flies a long-distance route, like the route from Tokyo to New York, the pilot flies in a straight line from one city to the other. Yet if you plot the route on a map, the line appears to be curved. The curve is a consequence of mapping a straight line on a sphere (Earth) onto a flat piece of paper.
If Earth were not round, but were instead a more complicated shape, possibly curved in wild and random ways, then an airplane’s trajectory (as shown on a flat two-dimensional map) would appear even more irregular, like the rays in the following images.
Each ray represents the trajectory an airplane would take if it started from the origin and tried to fly as straight as possible over a randomly fluctuating geometric surface. The amount of randomness that characterizes the surface is dialed up in the next images — as the randomness increases, the straight rays wobble and distort, turn into increasingly jagged bolts of lightning, and become nearly incoherent.
Yet incoherent is not the same as incomprehensible. In a random geometry, if you know the location of some points, you can (at best) assign probabilities to the location of subsequent points. And just like a loaded set of dice is still random, but random in a different way than a fair set of dice, it’s possible to have different probability measures for generating the coordinate values of points on random surfaces.
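The dice analogy is easy to make concrete: the fair and loaded dice below sample the same six outcomes under two different probability measures (the weights here are chosen arbitrarily for illustration).

```python
import random
from collections import Counter

random.seed(3)

faces = [1, 2, 3, 4, 5, 6]

# Two probability measures on the same outcomes: uniform weights for a
# fair die, and a measure that puts half its mass on six for a loaded one.
loaded_rolls = random.choices(faces, weights=[1, 1, 1, 1, 1, 5], k=6000)

counts = Counter(loaded_rolls)
print(counts[6] / 6000)    # about 0.5, versus 1/6 for a fair die
```

Both dice are "random," but they are governed by different measures, just as different random-surface models are governed by different probability measures on shapes.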
What mathematicians have found — and hope to continue to find — is that certain probability measures on random geometries are special, and tend to arise in many different contexts. It is as though nature has an inclination to generate its random surfaces using a very particular kind of die (one with an uncountably infinite number of sides). Mathematicians like Sheffield and Miller work to understand the properties of these dice (and the “typical” properties of the shapes they produce) just as precisely as mathematicians understand the ordinary sphere.
The first kind of random shape to be understood in this way was the random walk. Conceptually, a one-dimensional random walk is the kind of path you’d get if you repeatedly flipped a coin and walked one way for heads and the other way for tails. In the real world, this type of movement first came to attention in 1827 when the English botanist Robert Brown observed the random movements of pollen grains suspended in water. The seemingly random motion was caused by individual water molecules bumping into each pollen grain. Later, in the 1920s, Norbert Wiener of MIT gave a precise mathematical description of this process, which is called Brownian motion.
Brownian motion is the “scaling limit” of random walks — if you consider a random walk where each step size is very small, and the amount of time between steps is also very small, these random paths look more and more like Brownian motion. It’s the shape that almost all random walks converge to over time.
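The coin-flip walk and its diffusive scaling can be simulated directly. This minimal sketch checks that the typical displacement after n steps grows like the square root of n, which is the scaling under which rescaled walks converge to Brownian motion.

```python
import math
import random

random.seed(1)

def random_walk(n_steps):
    """Position after n_steps fair coin flips (+1 heads, -1 tails)."""
    pos = 0
    for _ in range(n_steps):
        pos += random.choice((1, -1))
    return pos

# Diffusive scaling: the root-mean-square displacement after n steps is
# about sqrt(n), so walk(n) / sqrt(n) has roughly unit spread at every
# scale. That scale-invariance is what survives in the Brownian limit.
for n in (100, 2500):
    samples = [random_walk(n) for _ in range(1000)]
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    print(n, rms / math.sqrt(n))   # both ratios come out near 1.0
```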
Two-dimensional random spaces, in contrast, first preoccupied physicists as they tried to understand the structure of the universe.
In string theory, one considers tiny strings that wiggle and evolve in time. Just as the time trajectory of a point can be plotted as a one-dimensional curve, the time trajectory of a string traces out a two-dimensional surface. This surface, called a worldsheet, encodes the history of the one-dimensional string as it wriggles through time.
“To make sense of quantum physics for strings,” said Sheffield, “you want to have something like Brownian motion for surfaces.”
For years, physicists have had something like that, at least in part. In the 1980s, physicist Alexander Polyakov, who’s now at Princeton University, came up with a way of describing these surfaces that came to be called Liouville quantum gravity (LQG). It provided an incomplete but still useful view of random two-dimensional surfaces. In particular, it gave physicists a way of defining a surface’s angles so that they could calculate the surface area.
In parallel, another model, called the Brownian map, provided a different way to study random two-dimensional surfaces. Where LQG facilitates calculations about area, the Brownian map has a structure that allows researchers to calculate distances between points. Together, the Brownian map and LQG gave physicists and mathematicians two complementary perspectives on what they hoped were fundamentally the same object. But they couldn’t prove that LQG and the Brownian map were in fact compatible with each other.
“It was this weird situation where there were two models for what you’d call the most canonical random surface, two competing random surface models, that came with different information associated with them,” said Sheffield.
Beginning in 2013, Sheffield and Miller set out to prove that these two models described fundamentally the same thing.
The Problem With Random Growth
Sheffield and Miller began collaborating thanks to a kind of dare. As a graduate student at Stanford in the early 2000s, Sheffield worked under Amir Dembo, a probability theorist. In his dissertation, Sheffield formulated a problem having to do with finding order in a complicated set of surfaces. He posed the question as a thought exercise as much as anything else.
“I thought this would be a problem that would be very hard and take 200 pages to solve and probably nobody would ever do it,” Sheffield said.
But along came Miller. In 2006, a few years after Sheffield had graduated, Miller enrolled at Stanford and also started studying under Dembo, who assigned him to work on Sheffield's problem as a way of getting to know random processes. "Jason managed to solve this, I was impressed, we started working on some things together, and eventually we had a chance to hire him at MIT as a postdoc," Sheffield said.
In order to show that LQG and the Brownian map were equivalent models of a random two-dimensional surface, Sheffield and Miller adopted an approach that was simple enough conceptually. They decided to see if they could invent a way to measure distance on LQG surfaces and then show that this new distance measurement was the same as the distance measurement that came packaged with the Brownian map.
To do this, Sheffield and Miller thought about devising a mathematical ruler that could be used to measure distance on LQG surfaces. Yet they immediately realized that ordinary rulers would not fit nicely into these random surfaces — the space is so wild that one cannot move a straight object around without the object getting torn apart.
The duo forgot about rulers. Instead, they tried to reinterpret the distance question as a question about growth. To see how this works, imagine a bacterial colony growing on some surface. At first it occupies a single point, but as time goes on it expands in all directions. If you wanted to measure the distance between two points, one (seemingly roundabout) way of doing that would be to start a bacterial colony at one point and measure how much time it took the colony to encompass the other point. Sheffield said that the trick is to somehow “describe this process of gradually growing a ball.”
It’s easy to describe how a ball grows in the ordinary plane, where all points are known and fixed and growth is deterministic. Random growth is far harder to describe and has long vexed mathematicians. Yet as Sheffield and Miller were soon to learn, “[random growth] becomes easier to understand on a random surface than on a smooth surface,” said Sheffield. The randomness in the growth model speaks, in a sense, the same language as the randomness on the surface on which the growth model proceeds. “You add a crazy growth model on a crazy surface, but somehow in some ways it actually makes your life better,” he said.
The following images show a specific random growth model, the Eden model, which describes the random growth of bacterial colonies. The colonies grow through the addition of randomly placed clusters along their boundaries. At any given point in time, it’s impossible to know for sure where on the boundary the next cluster will appear. In these images, Miller and Sheffield show how Eden growth proceeds over a random two-dimensional surface.
The first image shows Eden growth on a fairly flat — that is, not especially random — LQG surface. The growth proceeds in an orderly way, forming nearly concentric circles that have been color-coded to indicate the time at which growth occurs at different points on the surface.
In subsequent images, Sheffield and Miller illustrate growth on surfaces of increasingly greater randomness. The amount of randomness in the function that produces the surfaces is controlled by a constant, gamma. As gamma increases, the surface gets rougher — with higher peaks and lower valleys — and random growth on that surface similarly takes on a less orderly form. In the previous image, gamma is 0.25. In the next image, gamma is set to 1.25, introducing five times as much randomness into the construction of the surface. Eden growth across this uncertain surface is similarly distorted.
When gamma is set to the square root of eight-thirds (approximately 1.63), LQG surfaces fluctuate even more dramatically. They also take on a roughness that matches the roughness of the Brownian map, which allows for more direct comparisons between these two models of a random geometric surface.
Random growth on such a rough surface proceeds in a very irregular way. Describing it mathematically is like trying to anticipate minute pressure fluctuations in a hurricane. Yet Sheffield and Miller realized that they needed to figure out how to model Eden growth on very random LQG surfaces in order to establish a distance structure equivalent to the one on the (very random) Brownian map.
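On a flat square lattice, at least, the Eden model itself takes only a few lines to simulate: start a cluster at the origin and repeatedly attach a uniformly chosen boundary site. The sketch below (cluster size chosen arbitrarily) shows the growth rule; running it on a rough LQG surface, as Sheffield and Miller do, is the hard part.

```python
import random

random.seed(2)

def eden_growth(n_sites):
    """Grow an Eden cluster on the square lattice: starting from the
    origin, repeatedly attach one uniformly chosen empty neighbor of
    the existing cluster."""
    cluster = {(0, 0)}
    boundary = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    while len(cluster) < n_sites:
        site = random.choice(sorted(boundary))  # uniform boundary pick
        boundary.discard(site)
        cluster.add(site)
        x, y = site
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in cluster:
                boundary.add(nb)
    return cluster

colony = eden_growth(500)
# On the flat lattice the cluster fills out a roughly circular blob,
# like the orderly concentric growth in the first image; the wild shapes
# in the later images come from running the same rule on rough surfaces.
```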
“Figuring out how to mathematically make [random growth] rigorous is a huge stumbling block,” said Sheffield, noting that Martin Hairer of the University of Warwick won the Fields Medal in 2014 for work that overcame just these kinds of obstacles. “You always need some kind of amazing clever trick to do it.”
Sheffield and Miller’s clever trick is based on a special type of random one-dimensional curve that is similar to the random walk except that it never crosses itself. Physicists had encountered these kinds of curves for a long time in situations where, for instance, they were studying the boundary between clusters of particles with positive and negative spin (the boundary line between the clusters of particles is a one-dimensional path that never crosses itself and takes shape randomly). They knew these kinds of random, noncrossing paths occurred in nature, just as Robert Brown had observed that random crossing paths occurred in nature, but they didn’t know how to think about them in any kind of precise way. In 1999 Oded Schramm, who at the time was at Microsoft Research in Redmond, Washington, introduced the SLE curve (for Schramm-Loewner evolution) as the canonical noncrossing random curve.
Schramm’s work on SLE curves was a landmark in the study of random objects. It’s widely acknowledged that Schramm, who died in a hiking accident in 2008, would have won the Fields Medal had he been a few weeks younger at the time he’d published his results. (The Fields Medal can be given only to mathematicians who are not yet 40.) As it was, two people who worked with him built on his work and went on to win the prize: Wendelin Werner in 2006 and Stanislav Smirnov in 2010. More fundamentally, the discovery of SLE curves made it possible to prove many other things about random objects.
“As a result of Schramm’s work, there were a lot of things in physics they’d known to be true in their physics way that suddenly entered the realm of things we could prove mathematically,” said Sheffield, who was a friend and collaborator of Schramm’s.
For Miller and Sheffield, SLE curves turned out to be valuable in an unexpected way. In order to measure distance on LQG surfaces, and thus show that LQG surfaces and the Brownian map were the same, they needed to find some way to model random growth on a random surface. SLE proved to be the way.
“The ‘aha’ moment was [when we realized] you can construct [random growth] using SLEs and that there is a connection between SLEs and LQG,” said Miller.
SLE curves come with a constant, kappa, which plays a similar role to the one gamma plays for LQG surfaces. Where gamma describes the roughness of an LQG surface, kappa describes the “windiness” of SLE curves. When kappa is low, the curves look like straight lines. As kappa increases, more randomness is introduced into the function that constructs the curves and the curves turn more unruly, while obeying the rule that they can bounce off of, but never cross, themselves. Here is an SLE curve with kappa equal to 0.5, followed by an SLE curve with kappa equal to 3.
Sheffield and Miller noticed that when they dialed the value of kappa to 6 and gamma up to the square root of eight-thirds, an SLE curve drawn on the random surface followed a kind of exploration process. Thanks to works by Schramm and by Smirnov, Sheffield and Miller knew that when kappa equals 6, SLE curves follow the trajectory of a kind of “blind explorer” who marks her path by constructing a trail as she goes. She moves as randomly as possible except that whenever she bumps into a piece of the path she has already followed, she turns away from that piece to avoid crossing her own path or getting stuck in a dead end.
“[The explorer] finds that each time her path hits itself, it cuts off a little piece of land that is completely surrounded by the path and can never be visited again,” said Sheffield.
Sheffield and Miller then considered a bacterial growth model, the Eden model, that had a similar effect as it advanced across a random surface: It grew in a way that “pinched off” a plot of terrain that, afterward, it never visited again. The plots of terrain cut off by the growing bacteria colony looked exactly the same as the plots of terrain cut off by the blind explorer. Moreover, the information possessed by a blind explorer at any time about the outer unexplored region of the random surface was exactly the same as the information possessed by a bacterial colony. The only difference between the two was that while the bacterial colony grew from all points on its outer boundary at once, the blind explorer’s SLE path could grow only from the tip.
In a paper posted online in 2013, Sheffield and Miller imagined what would happen if, every few minutes, the blind explorer were magically transported to a random new location on the boundary of the territory she had already visited. By moving all around the boundary, she would be effectively growing her path from all boundary points at once, much like the bacterial colony. Thus they were able to take something they could understand — how an SLE curve proceeds on a random surface — and show that with some special configuring, the curve’s evolution exactly described a process they hadn’t been able to understand, random growth. “There’s something special about the relationship between SLE and growth,” said Sheffield. “That was kind of the miracle that made everything possible.”
The distance structure imposed on LQG surfaces through the precise understanding of how random growth behaves on those surfaces exactly matched the distance structure on the Brownian map. As a result, Sheffield and Miller merged two distinct models of random two-dimensional shapes into one coherent, mathematically understood fundamental object.
Turning Randomness Into a Tool
Sheffield and Miller have already posted the first two papers in their proof of the equivalence between LQG and the Brownian map on the scientific preprint site arxiv.org; they intend to post the third and final paper later this summer. The work turned on the ability to reason across different random shapes and processes — to see how random noncrossing curves, random growth, and random two-dimensional surfaces relate to one another. It’s an example of the increasingly sophisticated results that are possible in the study of random geometry.
“It’s like you’re in a mountain with three different caves. One has iron, one has gold, one has copper — suddenly you find a way to link all three of these caves together,” said Sheffield. “Now you have all these different elements you can build things with and can combine them to produce all sorts of things you couldn’t build before.”
Many open questions remain, including determining whether the relationship between SLE curves, random growth models, and distance measurements holds up in less-rough versions of LQG surfaces than the one used in the current paper. In practical terms, the results by Sheffield and Miller can be used to describe the random growth of real phenomena like snowflakes, mineral deposits, and dendrites in caves, but only when that growth takes place in the imagined world of random surfaces. It remains to be seen whether their methods can be applied to ordinary Euclidean space, like the space we live in.
This article was reprinted on Wired.com
Posted 17 August 2016 - 02:59 AM
Moonshine Master Toys With String Theory
The physicist-mathematician Miranda Cheng is working to harness a mysterious connection between string theory, algebra and number theory
After the Eyjafjallajökull volcano erupted in Iceland in 2010, flight cancellations left Miranda Cheng stranded in Paris. While waiting for the ash to clear, Cheng, then a postdoctoral researcher at Harvard University studying string theory, got to thinking about a paper that had recently been posted online. Its three coauthors had pointed out a numerical coincidence connecting far-flung mathematical objects. “That smells like another moonshine,” Cheng recalled thinking. “Could it be another moonshine?”
She happened to have read a book about the “monstrous moonshine,” a mathematical structure that unfolded out of a similar bit of numerology: In the late 1970s, the mathematician John McKay noticed that 196,884, the first important coefficient of an object called the j-function, was the sum of one and 196,883, the first two dimensions in which a giant collection of symmetries called the monster group could be represented. By 1992, researchers had traced this farfetched (hence “moonshine”) correspondence to its unlikely source: string theory, a candidate for the fundamental theory of physics that casts elementary particles as tiny oscillating strings. The j-function describes the strings’ oscillations in a particular string theory model, and the monster group captures the symmetries of the space-time fabric that these strings inhabit.
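In symbols, McKay's observation concerns the Fourier coefficients of the j-function:

```latex
j(\tau) = q^{-1} + 744 + 196884\,q + 21493760\,q^{2} + \cdots,
\qquad q = e^{2\pi i \tau},
```

with

```latex
196884 = 1 + 196883, \qquad 21493760 = 1 + 196883 + 21296876,
```

where 1, 196883, and 21296876 are the dimensions of the monster group's smallest irreducible representations. The pattern continues for the higher coefficients, which is what made the coincidence impossible to dismiss.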
By the time of Eyjafjallajökull’s eruption, “this was ancient stuff,” Cheng said — a mathematical volcano that, as far as physicists were concerned, had gone dormant. The string theory model underlying monstrous moonshine was nothing like the particles or space-time geometry of the real world. But Cheng sensed that the new moonshine, if it was one, might be different. It involved K3 surfaces — the geometric objects that she and many other string theorists study as possible toy models of real space-time.
By the time she flew home from Paris, Cheng had uncovered more evidence that the new moonshine existed. She and collaborators John Duncan and Jeff Harvey gradually teased out evidence of not one but 23 new moonshines: mathematical structures that connect symmetry groups on the one hand and fundamental objects in number theory called mock modular forms (a class that includes the j-function) on the other. The existence of these 23 moonshines, posited in their Umbral Moonshine Conjecture in 2012, was proved by Duncan and coworkers late last year.
Meanwhile, Cheng, 37, is on the trail of the K3 string theory underlying the 23 moonshines — a particular version of the theory in which space-time has the geometry of a K3 surface. She and other string theorists hope to be able to use the mathematical ideas of umbral moonshine to study the properties of the K3 model in detail. This in turn could be a powerful means for understanding the physics of the real world where it can’t be probed directly — such as inside black holes. An assistant professor at the University of Amsterdam on leave from France’s National Center for Scientific Research, Cheng spoke with Quanta Magazine about the mysteries of moonshines, her hopes for string theory, and her improbable path from punk-rock high school dropout to a researcher who explores some of the most abstruse ideas in math and physics. An edited and condensed version of the conversation follows.
Posted 18 August 2016 - 02:58 AM
Scientific data sets are becoming more dynamic, requiring new mathematical techniques on par with the invention of calculus.
Simon DeDeo, a research fellow in applied mathematics and complex systems at the Santa Fe Institute, had a problem. He was collaborating on a new project analyzing 300 years' worth of data from the archives of London's Old Bailey, the central criminal court of England and Wales. Granted, there was clean data in the usual straightforward Excel spreadsheet format, including such variables as indictment, verdict, and sentence for each case. But there were also full court transcripts, containing some 10 million words recorded during just under 200,000 trials.
“How the hell do you analyze that data?” DeDeo wondered. It wasn’t the size of the data set that was daunting; by big data standards, the size was quite manageable. It was the sheer complexity and lack of formal structure that posed a problem. This “big data” looked nothing like the kinds of traditional data sets the former physicist would have encountered earlier in his career, when the research paradigm involved forming a hypothesis, deciding precisely what one wished to measure, then building an apparatus to make that measurement as accurately as possible.
“In physics, you typically have one kind of data and you know the system really well,” said DeDeo. “Now we have this new multimodal data [gleaned] from biological systems and human social systems, and the data is gathered before we even have a hypothesis.” The data is there in all its messy, multi-dimensional glory, waiting to be queried, but how does one know which questions to ask when the scientific method has been turned on its head?
DeDeo is not the only researcher grappling with these challenges. Across every discipline, data sets are getting bigger and more complex, whether one is dealing with medical records, genomic sequencing, neural networks in the brain, astrophysics, historical archives or social networks. Alessandro Vespignani, a physicist at Northeastern University who specializes in harnessing the power of social networking to model disease outbreaks, stock market behavior, collective social dynamics, and election outcomes, has collected many terabytes of data from social networks such as Twitter, nearly all of it raw and unstructured. "We didn't define the conditions of the experiments, so we don't know what we are capturing," he said.
Today’s big data is noisy, unstructured, and dynamic rather than static. It may also be corrupted or incomplete. “We think of data as being comprised of vectors – a string of numbers and coordinates,” said Jesse Johnson, a mathematician at Oklahoma State University. But data from Twitter or Facebook, or the trial archives of the Old Bailey, look nothing like that, which means researchers need new mathematical tools in order to glean useful information from the data sets. “Either you need a more sophisticated way to translate it into vectors, or you need to come up with a more generalized way of analyzing it,” Johnson said.
Vespignani uses a wide range of mathematical tools and techniques to make sense of his data, including text recognition. He sifts through millions of tweets looking for the most relevant words to whatever system he is trying to model. DeDeo adopted a similar approach for the Old Bailey archives project. His solution was to reduce his initial data set of 100,000 words by grouping them into 1,000 categories, using key words and their synonyms. “Now you’ve turned the trial into a point in a 1,000-dimensional space that tells you how much the trial is about friendship, or trust, or clothing,” he explained.
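The reduction DeDeo describes can be sketched in a few lines: map individual words into a small set of categories, then represent a document as a point whose coordinates say how much of it falls in each category. The category lexicon below is a tiny illustrative stand-in, not DeDeo's actual 1,000-category scheme.

```python
# Toy category lexicon (assumed for illustration, not DeDeo's actual one):
CATEGORY_OF = {
    "friend": "friendship", "companion": "friendship",
    "trust": "trust", "faith": "trust",
    "coat": "clothing", "hat": "clothing",
}
CATEGORIES = ["friendship", "trust", "clothing"]

def trial_vector(words):
    """Return the fraction of recognized words falling in each category,
    turning a document into a point in len(CATEGORIES)-dimensional space."""
    counts = {c: 0 for c in CATEGORIES}
    hits = 0
    for w in words:
        cat = CATEGORY_OF.get(w.lower())
        if cat:
            counts[cat] += 1
            hits += 1
    if hits == 0:
        return [0.0] * len(CATEGORIES)
    return [counts[c] / hits for c in CATEGORIES]

transcript = "the companion betrayed his trust and stole a coat and hat".split()
print(trial_vector(transcript))  # [0.25, 0.25, 0.5]
```

With 1,000 categories instead of three, each Old Bailey trial becomes a point in a 1,000-dimensional space, exactly the kind of vector representation Johnson says unstructured data needs before standard tools can be applied.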
Posted 21 August 2016 - 09:14 PM
Remember the graph paper you used at school, the kind that's covered with tiny squares? It's the perfect illustration of what mathematicians call a "periodic tiling of space", with shapes covering an entire area with no overlap or gap. If we moved the whole pattern by the length of a tile (translated it) or rotated it by 90 degrees, we would get the same pattern. That's because in this case, the whole tiling has the same symmetry as a single tile. But imagine tiling a bathroom with pentagons instead of squares – it's impossible, because the pentagons won't fit together without leaving gaps or overlapping one another.
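The two symmetries described above are easy to check numerically. This little sketch encodes the square grid as a function that is true exactly on grid lines, then verifies that translating by one tile length or rotating by 90 degrees about a grid point leaves the pattern unchanged.

```python
def on_grid_line(x, y, eps=1e-9):
    """True when (x, y) lies on a line of the unit square grid."""
    return abs(x - round(x)) < eps or abs(y - round(y)) < eps

samples = [(0.0, 0.3), (0.5, 0.5), (1.2, 2.0)]

# Translation by one tile length leaves the pattern unchanged:
assert all(on_grid_line(x + 1, y) == on_grid_line(x, y) for x, y in samples)

# So does a 90-degree rotation about a grid point, (x, y) -> (-y, x):
assert all(on_grid_line(-y, x) == on_grid_line(x, y) for x, y in samples)

print("square tiling is invariant under unit translation and 90-degree rotation")
```

The same check run on a pentagonal candidate would fail: no translation maps a pentagon tiling onto itself, which is the gap-and-overlap problem the article describes.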
Patterns (made up of tiles) and crystals (made up of atoms or molecules) are typically periodic, like a sheet of graph paper, and have related symmetries. Among all possible arrangements, these regular arrangements are preferred in nature because they require the least energy to assemble. In fact, we've only known for a couple of decades that non-periodic tiling, which creates never-repeating patterns, can exist in crystals. Now my colleagues and I have made a model that can help us understand how this happens.
In the 1970s, physicist Roger Penrose discovered that it was possible to make a pattern from two different shapes with the angles and sides of a pentagon. The pattern looks the same when rotated through 72 degrees, meaning that as you turn it through a full 360-degree circle, it looks the same from five different angles. Many small patches of the pattern are repeated over and over again; for example, in the graphic below, the five-pointed orange star recurs throughout. But in each case these stars are surrounded by different shapes, which implies that the whole pattern never repeats in any direction. This graphic is therefore an example of a pattern with rotational symmetry but no translational symmetry.
Read more at: http://phys.org/news...tterns.html#jCp
Posted 21 August 2016 - 09:56 PM
Figure A shows a representation of a stable sequential working memory; different information items or memory patterns are shown in different colors. Credit: Image adapted from Rabinovich, M.I. et al. (2014)
Try to remember a phone number, and you're using what's called your sequential memory. This kind of memory, in which your mind processes a sequence of numbers, events, or ideas, underlies how people think, perceive, and interact as social beings.
"In our life, all of our behaviors and our process of thinking is sequential in time," said Mikhail Rabinovich, a physicist and neurocognitive scientist at the University of California, San Diego.
To understand how sequential memory works, researchers like Rabinovich have built mathematical models that mimic this process. In particular, he and a group of researchers have now mathematically modeled how the mind switches among different ways of thinking about a sequence of objects, events, or ideas, which are based on the activity of so-called cognitive modes.
The new model is described in the journal Chaos, from AIP Publishing, and it may help scientists understand a variety of human psychiatric conditions that may involve sequential memory, including obsessive-compulsive disorder, bipolar disorder, attention deficit disorder, schizophrenia, and autism.
Cognitive modes are the basic states of neural activity. Thinking, perceiving, and any other neural activity incorporate various parts of the brain working together in concert. When and where this activity occurs takes on well-defined patterns, and these patterns are called cognitive modes.
To understand what the researchers modeled, Rabinovich explained, think of the modes as competing figure skaters. You can describe the skaters in three ways. First, you can consider their backgrounds: their names or where they come from. You can also describe the technical aspects of their performances—how well they did that triple-toe-loop, for instance. Finally, you can understand their skating from a purely emotional or aesthetic perspective: their facial expressiveness, their costumes, or the simple beauty of movement. To fully comprehend the skaters, you have to constantly switch among these three perspectives. When these perspectives describe cognitive modes, they're called modalities.
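Rabinovich's models of sequential switching are typically built on generalized Lotka-Volterra ("winnerless competition") dynamics, in which modes inhibit one another asymmetrically so that activity passes from one mode to the next. The sketch below is an illustrative toy of that kind of dynamics; the coupling matrix and initial state are assumptions, not parameters from the Chaos paper.

```python
# dx_i/dt = x_i * (1 - sum_j rho[i][j] * x_j): three modes competing for activity.
rho = [[1.0, 1.5, 0.5],   # each mode inhibits itself and the others;
       [0.5, 1.0, 1.5],   # the asymmetric couplings (1.5 vs. 0.5) are what
       [1.5, 0.5, 1.0]]   # produce sequential, one-mode-at-a-time switching

x = [0.9, 0.05, 0.05]     # mode activities at t = 0
dt = 0.01
for _ in range(5000):     # plain Euler integration up to t = 50
    dx = [x[i] * (1.0 - sum(rho[i][j] * x[j] for j in range(3)))
          for i in range(3)]
    # Floor at a tiny positive value so suppressed modes can recover later:
    x = [max(x[i] + dt * dx[i], 1e-12) for i in range(3)]

print([round(v, 4) for v in x])  # mode activities at t = 50
```

Tracked over time, the trajectory visits the three modes in sequence, which is the mathematical picture behind switching among the three "figure skater" perspectives.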
Read more at: http://phys.org/news...iatric.html#jCp
Posted 22 August 2016 - 09:23 PM
A proof marks the end of an era in the study of three-dimensional shapes.
Thirty years ago, the mathematician William Thurston articulated a grand vision: a taxonomy of all possible finite three-dimensional shapes.
Thurston, a Fields medalist who spent much of his career at Princeton and Cornell, had an uncanny ability to imagine the unimaginable: not just the shapes that live inside our ordinary three-dimensional space, but also the far vaster menagerie of shapes that involve such complicated twists and turns that they can only fit into higher-dimensional spaces. Where other mathematicians saw inchoate masses, Thurston saw structure: symmetries, surfaces, relationships between different shapes.
“Many people have an impression, based on years of schooling, that mathematics is an austere and formal subject concerned with complicated and ultimately confusing rules,” he wrote in 2009. “Good mathematics is quite opposite to this. Mathematics is an art of human understanding. … Mathematics sings when we feel it in our whole brain.”
At the core of Thurston’s vision was a marriage between two seemingly disparate ways of studying three-dimensional shapes: geometry, the familiar realm of angles, lengths, areas and volumes, and topology, which studies all the properties of a shape that don’t depend on precise geometric measurements — the properties that remain unchanged if the shape gets stretched and distorted like Silly Putty.
To a topologist, the surface of a frying pan is equivalent to that of a table, a pencil or a soccer ball; the surface of a coffee mug is equivalent to a doughnut surface, or torus. From a topologist’s point of view, the multiplicity of two-dimensional shapes — that is, surfaces — essentially boils down to a simple list of categories: sphere-like surfaces, toroidal surfaces, and surfaces like the torus but with more than one hole. (Most of us think of spheres and tori as three-dimensional, but because mathematicians think of them as hollow surfaces, they consider them two-dimensional objects, measured in terms of surface area, not volume.)
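The classification of surfaces described above can be made concrete with the Euler characteristic, chi = V - E + F, computed from any division of a surface into V vertices, E edges, and F faces: sphere-like surfaces have chi = 2, tori have chi = 0, and a surface with g holes has chi = 2 - 2g.

```python
def euler_characteristic(v, e, f):
    """chi = V - E + F, a topological invariant of a surface."""
    return v - e + f

# A cube's surface (8 vertices, 12 edges, 6 faces) is topologically a sphere:
assert euler_characteristic(8, 12, 6) == 2

# A torus built by gluing opposite edges of one square: 1 vertex, 2 edges, 1 face.
assert euler_characteristic(1, 2, 1) == 0

def genus(chi):
    """Number of holes in a surface with Euler characteristic chi."""
    return (2 - chi) // 2

print(genus(2), genus(0), genus(-2))  # 0 1 2: sphere, torus, two-holed torus
```

This single number is why the "multiplicity of two-dimensional shapes" collapses to a simple list; no comparably simple invariant classifies three-manifolds, which is what made Thurston's program necessary.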
Thurston’s key insight was that it is in the union of geometry and topology that three-dimensional shapes, or “three-manifolds,” can be understood. Just as the topological category of “two-manifolds” containing the surfaces of a frying pan and a pencil also contains a perfect sphere, Thurston conjectured that many categories of three-manifolds contain one exemplar, a three-manifold whose geometry is so perfect, so uniform, so beautiful that, as Walter Neumann of Columbia University is fond of saying, it “rings like a bell.” What’s more, Thurston conjectured, shapes that don’t have such an exemplar can be carved up into chunks that do.
In a 1982 paper, Thurston set forth this “geometrization conjecture” as part of a group of 23 questions about three-manifolds that offered mathematicians a road map toward a thorough understanding of three-dimensional shapes. (His list had 24 questions, but one of them, still unresolved, is more of an intriguing side alley than a main thoroughfare.)
“Thurston had this enormous talent for asking the right questions,” said Vladimir Markovic, a mathematician at the California Institute of Technology. “Anyone can ask questions, but it’s rare for a question to lead to insight and beauty, the way Thurston’s questions always seemed to do.”
These questions inspired a new generation of mathematicians, dozens of whom chose to pursue their graduate studies under Thurston’s guidance. Thurston’s mathematical “children” manifest his style, wrote Richard Brown of Johns Hopkins University. “They seem to see mathematics the way a child views a carnival: full of wonder and joy, fascinated with each new discovery, and simply happy to be a part of the whole scene.”
In the decades after Thurston’s seminal paper appeared, mathematicians followed his road map, motivated less by possible applications than by a realization that three-manifolds occupy a sweet spot in the study of shapes. Two-dimensional shapes are a bit humdrum, easy to visualize and categorize. Four-, five- and higher-dimensional shapes are essentially untamable: the range of possibilities is so enormous that mathematicians have limited their ambitions to understanding specialized subclasses of them. For three-dimensional shapes, by contrast, the structures are mysterious and mind-boggling, but ultimately knowable.
As Thurston’s article approached its 30th anniversary this year, all but four of the 23 main questions had been settled, including the geometrization conjecture, which the Russian mathematician Grigori Perelman proved in 2002 in one of the signal achievements of modern mathematics. The four unsolved problems, however, stubbornly resisted proof.
“The fact that we couldn’t solve them for so long meant that something deep was going on,” said Yair Minsky, of Yale University.
Finally, in March, Ian Agol, of the University of California at Berkeley, electrified the mathematics community by announcing a proof of "Wise's conjecture," which settled the last four of Thurston's questions in one stroke.
Mathematicians are calling the result the end of an era.
“The vision of three-manifolds that Thurston articulated in his paper, which must have looked quite fantastic at the time, has now been completely realized,” said Danny Calegari, of the California Institute of Technology. “His vision has been remarkably vindicated in every way: every detail has turned out to be correct.”
“I used to feel that there was certain knowledge and certain ways of thinking that were unique to me,” Thurston wrote when he won a Steele mathematics prize this year, just months before he died in August at 65. “It is very satisfying to have arrived at a stage where this is no longer true — lots of people have picked up on my ways of thought, and many people have proven theorems that I once tried and failed to prove.”
Agol’s result means that there is a simple recipe for constructing all compact, hyperbolic three-manifolds — the one type of three-dimensional shape that had not yet been fully explicated.
“In a precise sense, we now understand what all three-manifolds look like,” said Henry Wilton, of University College London. “This is the culmination of a massive success story in mathematics.”
read more at...
Posted 18 September 2016 - 03:30 AM
The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe
Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics.
- by Emerging Technology from the arXiv
- September 9, 2016
In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players.
But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. Mathematicians are flummoxed. Despite the huge success of deep neural networks, nobody is quite sure how they achieve their success.
Today that changes thanks to the work of Henry Lin at Harvard University and Max Tegmark at MIT. These guys say the reason why mathematicians have been so embarrassed is that the answer depends on the nature of the universe. In other words, the answer lies in the regime of physics rather than mathematics.
First, let’s set up the problem using the example of classifying a megabit grayscale image to determine whether it shows a cat or a dog.
Such an image consists of a million pixels that can each take one of 256 grayscale values. So in theory, there are 256^1,000,000 possible images, and for each one it is necessary to compute whether it shows a cat or a dog. And yet neural networks, with merely thousands or millions of parameters, somehow manage this classification task with ease.
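The number 256^1,000,000 is too large to write out, but its scale is a quick back-of-envelope computation: a number N has about log10(N) + 1 decimal digits.

```python
import math

# How many decimal digits does 256**1_000_000 have?
# digits(N) = floor(log10(N)) + 1, and log10(256**1_000_000) = 1_000_000 * log10(256).
num_digits = int(1_000_000 * math.log10(256)) + 1
print(num_digits)  # about 2.4 million decimal digits
```

For comparison, the number of atoms in the observable universe has only about 80 digits, which is why exhaustively tabulating the classification function is out of the question.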
In the language of mathematics, neural networks work by approximating complex mathematical functions with simpler ones. When it comes to classifying images of cats and dogs, the neural network must implement a function that takes as an input a million grayscale pixels and outputs the probability distribution of what it might represent.
The problem is that there are orders of magnitude more mathematical functions than possible networks to approximate them. And yet deep neural networks somehow get the right answer.
Now Lin and Tegmark say they’ve worked out why. The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties.
Posted 23 September 2016 - 01:08 AM
Nobody understands the brain’s wiring diagram, but the tools of algebraic topology are beginning to tease it apart.
- by Emerging Technology from the arXiv
- August 24, 2016
The human connectome is the network of links between different parts of the brain. These links are mapped out by the brain’s white matter—bundles of nerve cell projections called axons that connect the nerve cell bodies that make up gray matter.
The conventional view of the brain is that the gray matter is primarily involved in information processing and cognition, while white matter transmits information between different parts of the brain. The structure of white matter—the connectome—is essentially the brain’s wiring diagram.
This structure is poorly understood, but there are several high-profile projects to study it. This work shows that the connectome is much more complex than originally thought. The human brain contains some 10^10 neurons linked by 10^14 synaptic connections. Mapping the way these link together is a tricky business, not least because the structure of the network depends on the resolution at which it is examined.
This work is also uncovering evidence that the white matter plays a much more significant role than first thought in learning and coordinating the brain’s activity. But exactly how this role is linked to the structure is not known.
So understanding this structure over vastly different scales is one of the great challenges of neuroscience, but one that is hindered by a lack of appropriate mathematical tools.
Today, that looks set to change thanks to the mathematical field of algebraic topology, which neurologists are gradually coming to grips with for the first time. This discipline has traditionally been an arcane pursuit for classifying spaces and shapes. Now Ann Sizemore at the University of Pennsylvania and a few pals show how it is beginning to revolutionize our understanding of the connectome.
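A starting point for the algebraic-topology approach to networks is finding cliques: all-to-all connected groups of nodes, which are treated as the building blocks ("simplices") from which higher-order structure is read off. Here is a minimal sketch counting 3-cliques (triangles) in a toy graph; the edge list is illustrative, not real connectome data.

```python
from itertools import combinations

# Toy undirected graph, stored as sorted node pairs (assumed example data):
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)}
nodes = {n for e in edges for n in e}

def is_clique(group):
    """True when every pair of nodes in the group is connected."""
    return all(tuple(sorted(p)) in edges for p in combinations(group, 2))

triangles = [g for g in combinations(sorted(nodes), 3) if is_clique(g)]
print(triangles)  # [(0, 1, 2), (1, 2, 3)]
```

Counting cliques of each size, and the cavities they enclose, gives topological signatures of a network that do not depend on the resolution at which it is drawn, which is what makes the approach attractive for the multi-scale connectome.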
Posted 27 June 2017 - 06:56 AM
First time posting here...
Optimal universal origami folding, with more practical results
At the Symposium on Computational Geometry in July, Erik Demaine of MIT and Tomohiro Tachi of the University of Tokyo will announce the completion of a quest that began with a 1999 paper: a universal algorithm for folding origami shapes that guarantees a minimum number of seams.