On Media Coverage of Math
Edited by Mike Breen and Annette Emerson, AMS Public Awareness Officers
Harvey Mudd College President (and mathematician) Maria Klawe, a "fascinating Angeleno" (Photo © Kevin Mapp/Harvey Mudd College.)
"The news should start with mathematics, then poetry, and move down from there," from The Humans, by Matt Haig.
See also: The AMS Blog on Math Blogs: Two mathematicians tour the mathematical blogosphere. Editors Brie Finegold and Evelyn Lamb, both PhD mathematicians, blog on blogs that have posts related to mathematics research, applied mathematics, mathematicians, math in the news, mathematics education, math and the arts, and more. Recent posts: "Crowd-Funded Mathematics," "Discovering Proofs," and "Narrowing The Gender Gap."
This brief article presents an interview with marine biologist Asha de Vos, who is working on ways to prevent Sri Lankan blue whales from colliding with large ships. She and her collaborators are developing mathematical models "to understand which areas are key to the survival of the Sri Lankan whales, and where they intersect with shipping lanes." de Vos says, "Using sighting locations and satellite data on temperature, chlorophyll concentrations and salinity, we build habitat models.... When we know which conditions attract whales, we can identify other areas in which they might gather, then overlay the shipping lanes to see where the ships and the whales come together. Then we can target those areas for reducing the likelihood of collisions."
See: "I'm modelling whale ways to keep them safe from ships: Interview with Asha de Vos", by Madhumita Venkataramanan, New Scientist, 7 July 2014, and for more research on blue whales, "Inter-Annual Variability in Blue Whale Distribution off Southern Sri Lanka between 2011 and 2012," by Asha de Vos, Charitha B. Pattiaratchi, and Robert G. Harcourt, Journal of Marine Science and Engineering, 1 July 2014.
--- Allyn Jackson (Posted 7/8/14)
If you could choose any eleven mathematicians from history to play on your World Cup team, which geometer would be your striker? If you were stuck on a desert island for an indeterminate amount of time, and you could only bring three mathematicians with you, which number theorist would go in your suitcase? If you had to eat only one mathematician for the rest of your life, which probabilist would be on your plate? We all have our own personal list of the top 100 mathematicians of all time (see Greatest Mathematicians of All Time), and we've all debated it with our friends and fought about it with our spouses.
Kiersz's list isn't a ranking of the all-time top 12--he focuses on mathematicians whose research presaged modern innovations in mathematics and its applications--but he makes some thought-provoking choices. There are the few indisputable greats: Newton, Leibniz, Gauss, Euler, and Euclid are probably in everyone's top 10. Most people would include Pythagoras somewhere, if only for the ubiquitous Pythagorean theorem, and his followers' extreme intolerance for irrational numbers. Descartes for sure, because where would math be without the Cartesian plane? But what about Archimedes? Did you know that he used limit-like arguments to estimate the value of pi and the area under a parabolic curve almost two millennia before calculus was invented? Not to mention he was probably the first, and certainly the most famous, person to celebrate a breakthrough (his understanding of how solids displace liquids) by running through the streets naked shouting 'Eureka!' And it's a tragedy of cultural imperialism that Muhammad ibn Musa al-Khwarizmi, the namesake of the algorithm and father of algebra, mostly goes forgotten. John Napier, the little-known father of the controversial logarithm (it's an acquired taste), is another interesting pick.
Kiersz largely leaves out the modern era, but there are plenty of players, er, scholars, from the past three centuries worthy of attention. Bernhard Riemann, a star of the German team in the mid-1800s, made three great accomplishments, any one of which would put him on the list: formalizing what's now known as the Riemann integral; founding what's now known as Riemannian geometry; and, in a single short paper, introducing the world to the Riemann zeta function and the outstanding Riemann hypothesis. Or take Poincaré, whom E.T. Bell called "the last universalist": the father of chaos theory, the Poincaré group, and the recently proved eponymous conjecture, he is surely one of the best. Of course, it's hard to narrow down any list of the all-time greats, because each mathematician has his own style (not to mention his own field). Analysts might include Cantor, Weierstrass, Cauchy, and Fourier; probabilists would surely argue for Pascal and Bayes, not to mention the Russian giant Kolmogorov; geometers and physicists might tout the accomplishments of Hamilton, Noether, Lie, Weyl, and Einstein; and number theorists would surely give Ramanujan a nod. The fathers of computer science--Babbage, Lovelace, Turing, and von Neumann--certainly deserve a place. And who could forget the prolific combinatorialist Paul Erdős? Then there are those working today, or very recently, whose accomplishments, in time, might be as impactful as those of their predecessors--mathematicians like Fermat-slayer Andrew Wiles, chaos theorist and topologist Stephen Smale, analyst Terence Tao, probabilist Oded Schramm, and *cough* Benjamin Pittman-Polletta. They may not all be on your list, but there's definitely room for them on mine.
See "The 12 Mathematicians Who Set The Stage For The Modern World", by Andy Kiersz, Business Insider, 3 July 2014.
--- Ben Pittman-Polletta (Posted 7/11/14)
Just when I thought my excitement over the World Cup couldn't reach a more fevered pitch, a recent piece in The New York Times explains that along with the exhilaration of victory and the subsequent swelling of national pride, we are seeing economic principles at play. London School of Economics professor and die-hard soccer--or rather, football--fan, Ignacio Palacios-Huerta, breaks it down.
Palacios-Huerta analyzed over 9,000 penalty kicks and found that they exhibited precisely the strategic behavior predicted by the economist John Forbes Nash, Jr. His principle, the Nash equilibrium, says that in a repeated zero-sum game, the best strategy is to mix your moves unpredictably. Penalty kicks, where one player takes a shot directly on goal, are a great example of this sort of game. Obviously it would be silly for a player to always shoot to the left or right, because then the goalie would know what to expect. But then again, a player might prefer to shoot with a dominant leg, so we wouldn't expect to see a perfect 50-50 split. And indeed Palacios-Huerta found that players varied their ball placement exactly as the theory predicts: 60% to the right, 40% to the left.
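That mixed-strategy logic is easy to make concrete in a toy two-by-two zero-sum game. The scoring probabilities below are invented for illustration (they are not Palacios-Huerta's data); the point is that a stronger natural side pushes the equilibrium mix away from 50-50, toward something like the observed 60-40 split.

```python
# Toy penalty kick: the kicker shoots Left or Right, the goalie dives Left
# or Right. Entries are the kicker's scoring probability (made-up numbers,
# with Left as the kicker's stronger, "natural" side).
score = {
    ("L", "L"): 0.70, ("L", "R"): 0.95,
    ("R", "L"): 0.90, ("R", "R"): 0.50,
}

# At a Nash equilibrium of a 2x2 zero-sum game, the kicker mixes so that
# the goalie concedes at the same rate whichever way she dives:
#   p*score[L,L] + (1-p)*score[R,L] = p*score[L,R] + (1-p)*score[R,R]
a, b = score[("L", "L")], score[("L", "R")]
c, d = score[("R", "L")], score[("R", "R")]
p_left = (d - c) / (a - b - c + d)  # kicker's equilibrium P(shoot left)

print(f"equilibrium: shoot left {p_left:.0%} of the time")
```

With these numbers the kicker should shoot to the natural side about 62% of the time; any deviation from that mix hands the goalie an edge.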
Soccer games, Palacios-Huerta claims, are also a great place to test the Efficient Market Hypothesis. In the stock market, this principle basically says that the market reacts so quickly that the price of a stock always reflects its true value based on all known information. So if this is true, no matter how fast you buy, you can never really get a stock for less than it's worth. It should also mean that the price of a stock should stay relatively still in the absence of new information. Of course this is just a hypothesis, and quite difficult to test in the real stock market.
To support this hypothesis, researchers Karen Croxson and J. James Reade found that in soccer, even if a goal was scored moments before the clock stopped, half-time betting remained relatively stable through the break. Although people continue to place bets over halftime, the prices for betting updated "swiftly and fully" within moments of the goal.
But then again, sometimes--er, think the infamous Brazil vs. Croatia penalty shot--there are subtleties that seem beyond the scope of the most sophisticated numerical methods. Palacios-Huerta covers this, along with the economics of fear and corruption, in his book "Beautiful Game Theory: How Soccer Can Help Economics."
Oh, and of course, GO USA!
See: "The Beautiful Data Set," by Ignacio Palacios-Huerta. The New York Times, 13 June 2014.
--- Anna Haensch (Posted 6/25/14)
Richard Tapia is an outstanding applied mathematician and a first-generation Mexican-American who has worked hard to recruit other members of minority groups into mathematics and science. This article is a profile of Tapia that offers a window into how the mathematical community regards the need to encourage members of underrepresented minorities to join the field. The article reports that Tapia has been going around the country giving a talk titled "Racism in Mathematics: A Direct Factor in Underrepresentation." No doubt the title rankles some, and Tapia acknowledges that he chose it partly to get attention. Nevertheless he does believe that students from underrepresented minorities face extra hurdles that students who are white or Asian do not. For more than 40 years, Tapia has worked energetically and on many fronts to remove those hurdles and get more minority students on track to positions in top mathematics departments. His efforts have been recognized by many awards, including the 2014 Vannevar Bush Award, the highest honor the National Science Foundation gives for lifetime achievement. Photo: Rice University.
See "Minority voice," by Jeffrey Mervis. Science, 6 June 2014, pages 1076-1079 (unfortunately, you need an account to read the article online).
---Allyn Jackson (Posted June 18, 2014)
If you are a math fan or an art fan (or both), I hope you have a couple of hours to spare, because it will be easy to lose them in this rambling tour of technology in art. Author Ron Miller aims to trace the history of computer-generated imagery, but he takes the form's ancestors to be all applications of science to the visual arts, beginning with the development of perspective by Filippo Brunelleschi in the early 1400s, and the subsequent creation of specialized machinery for perspective drawing by Albrecht Dürer. Miller serves up chestnuts of art technology like the pantograph--a mechanical linkage which reproduces the movements of a pen at a larger scale; the camera obscura or pin-hole camera--a dark room or box with a hole in one side, used to project an image for tracing; and, of course, the plain vanilla (phoneless) camera. There's an interesting discussion on the invention of the airbrush (in the 1800s) and the scorn it earned from professional artists, and a neat digression on the early uses of mathematics to transform an (analog) input data stream into interesting images, with tools like the oscilloscope and the spirograph. Miller discusses the creation of ASCII art and the mechanical plotter as ways to use the earliest analog computers to generate images. And among countless other artists and inventions in the prehistory of digital and computer art, Miller notes that the first computer graphics seem to have appeared around the same time as the first computers. Filmmaker John Whitney Sr., creator of the title sequence for Vertigo, founded Motion Graphics Incorporated in 1960. Using a 12-foot-tall computer he built himself from surplus military electronics, Whitney developed a catalog of special effects which, while hard to sit still through today, I can only imagine blew the socks and garters off of Eisenhower-era moviegoers.
There are lots of jumping-off points here, but the breathless pace leaves out plenty of details. I personally could (did) spend hours looking into the science and math behind Robert Hooke and Ernst Chladni's technique for revealing the nodal lines of a vibrating surface, recently captured in a video that made the rounds on Facebook (versions without the cheesy music, and with more explanation, are also out there). Not surprisingly, there are deep connections between these nodal patterns and the normal modes they arise from, quantum numbers and molecular orbitals, and representation theory. Or, if that's not your speed, you might spend the rest of your workday in the Digital Art Museum, an online museum currently hosting an exhibition of plotter drawings from the 1960s, including works by many of the pioneering artists mentioned in Miller's article. You can also explore more contemporary computer artists, like Yoichiro Kawaguchi. Along with computer scientist Jim Blinn, Kawaguchi brought softer shapes into computer art with the creation of so-called metaballs in the 1980s (also "Real time Metaballs" video). Kawaguchi's recent work is even more organic and colorful, including computer-designed "robots" which look like the intelligent descendants of tropical mollusks. Whether or not you're a fan of CGI, there's plenty here to stave off boredom and/or productivity--so be warned.
See: "The Forgotten History Of CGI," by Ron Miller. Io9, 31 May 2014.
--- Ben Pittman-Polletta (Posted 6/10/14)
Image: Prof. Mark Whitehorn, School of Computing, University of Dundee.
Some problems are too hard to work out exactly, even if the answers sought are only probabilistic. Unfortunately, this seems to hold for many of the questions one might seek answers to, such as the probable outcomes of business decisions, government policies, or medical treatments. Furthermore, even when exact (deterministic) solutions exist, time constraints may make them impractical, and even unnecessary. Here, Mark Whitehorn describes how mathematicians and computer scientists employ Monte Carlo simulations as computational tools to get good estimates of probabilities, including the outcomes of risky decisions.
The Monte Carlo idea is as simple as simple gets: if you want to understand the distribution of some quantity which is a function of one or more random variables whose distributions are known, one way to estimate the distribution is to calculate it for many (randomly chosen) realizations of the underlying random variables. Monte Carlo can also be used to estimate the outcomes of deterministic processes, if those processes can be stated in a probabilistic framework. For example, pi/4 can be thought of as the probability that a point sampled uniformly from the unit square will fall inside the unit circle; pi can thus be estimated from a Monte Carlo simulation in which points are picked randomly from the unit square and the fraction falling inside the unit circle is counted. The Monte Carlo class of techniques encompasses a wide variety of approaches to using probability and computation to estimate difficult calculations. These techniques are especially useful for estimating quantities that depend on a large number of interacting random variables--which often occurs in quantum mechanics, engineering, meteorology, and computational biology--and when modeling phenomena with significant uncertainty in the inputs--often the case when studying human systems in business, economics, and politics.
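As a minimal sketch (my own, not from Whitehorn's article), the pi estimate looks like this: sample points uniformly in the unit square, count the fraction landing inside the unit circle, and multiply by four.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def estimate_pi(n_points: int) -> float:
    """Monte Carlo estimate of pi: the fraction of uniform points in the
    unit square with x^2 + y^2 <= 1 approximates pi/4."""
    inside = sum(
        1 for _ in range(n_points)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_points

estimate = estimate_pi(100_000)
print(f"pi is approximately {estimate}")
```

With 100,000 samples the estimate typically lands within a hundredth or so of pi; the error shrinks only like one over the square root of the sample count, which is why Monte Carlo shines on problems where nothing better is available.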
While useful, Monte Carlo methods are by no means new. They were developed in 1946, while Stanislaw Ulam was working on the Manhattan Project. Ulam spent some time laid up in bed, playing solitaire. He began wondering what the probability of a winning shuffle is, and after wrestling with the gnarly combinatorics, he "wondered whether a more practical method than 'abstract thinking' might not be to lay it out say one hundred times and simply observe and count the number of successful plays." Ulam immediately grasped that recent developments in computer technology made it possible to use a similar method to understand other problems in mathematical physics, and he and John von Neumann developed these ideas, and used them to estimate a number of calculations essential to the Manhattan Project. They needed a code name for the project, and called it Monte Carlo, after the casino that Ulam's uncle frequented, and in a nod to the randomness required to make the method work.
See "About to make a big bet? Don't crash out, cash in with the power of maths," by Mark Whitehorn, The Register, 29 May 2014.
--- Ben Pittman-Polletta (Posted 6/25/14)
Image: Courtesy of Flickr/frankblacknoir.
Soap bubbles have always been a fun pastime for children, and according to a recent article in Phys.org, they have also become the primary object of study for a team of researchers at the University of Cambridge. In particular, the researchers are studying soap films of all different shapes and trying to predict where the points of instability will occur when the films are stretched and twisted to the point of collapse.
The group has done a thorough experimental, computational, and theoretical examination of the topologies of different soap films in order to conjecture where the point of collapse will occur. Their conjecture: the point of instability depends entirely on the location of something called the systole.
A systole is the minimum-length closed curve on the surface which can't be shrunk down to a point--by a closed curve, I mean a curve that you can start walking along and eventually arrive back where you started. Perhaps the simplest example of a soap film is the catenoid, which you get by dipping a circular wire in soap and slowly lifting it.
Image: Courtesy of Flickr/hoyasmeg
In the case of a catenoid, you probably guessed that the film would break at the thinnest point of the surface, and you would be correct! This point is the systole. Check out this video to see what it would look like in slow motion.
That point where the soap film breaks is called a singularity, and it is of great interest to scientists in many fields to understand where and why these occur. Singularities can be used to explain many phenomena in the natural world, including fluid turbulence and solar flares.
The team has recognized that systoles can occur in one of two ways: either in the bulk of the film, as with the catenoid, or around the boundary, as with the Möbius film in the image below. This location, they believe, precisely determines the behavior of the singularity.
Image: Courtesy of PNAS
The original paper appeared in the Proceedings of the National Academy of Sciences (PNAS).
See "Soap films with complex shapes shed light on the formation of mathematical singularities (w/ Video)." Phys.org, 27 May 2014.
--- Anna Haensch (posted 6/5/14)
Johanna Wanka, a mathematician, is federal research minister of Germany. She is in the midst of some tough political wrangling, as the federal government and state governments disagree about how research money should be spent. Germany's chancellor, Angela Merkel, is a physicist and Wanka thinks that a scientific background can help in politics: "We may think more...like in a chess game." The article authors conclude, "German scientists are waiting expectantly for her [Wanka's] next move."
See "Doing the math in Berlin," by Kai Kupferschmidt and Gretchen Vogel. Science, 23 May 2014, pages 791-792.
--- Mike Breen
Image: Wikimedia Commons, courtesy of rangevoting.org
When it comes to politics, humans are really bad at being fair and unbiased. To see evidence of this, people need not look much farther than their own state's congressional districting map. A recent article in the Chicago Sun-Times compares the current congressional district map of Illinois, which looks like somebody gave a kid a crayon and set him loose, to a proposed congressional map of Illinois designed using a simple mathematical algorithm. I'll let you guess which one is which.
Congressional district maps got to be so crazy-looking in the first place because of something called gerrymandering, the practice of redrawing district lines to create more favorable outcomes for a particular political party. For the quick backstory on gerrymandering--including where the heck the word gerrymander comes from--check out this short video from TED-Ed.
So the bottom line: gerrymandering is bad. In order to have an honest and representative democracy, we need to find a way to break a state into districts that gives an accurate picture of the state's population, and more importantly, can't be tampered with by meddling human hands. One possible solution, which we are looking at in the right-hand map above, is the brainchild of Warren D. Smith, co-founder of rangevoting.org and co-inventor of the Splitline algorithm.
Smith's algorithm goes as follows: suppose we start with a state of population P that needs to be broken into D distinct districts. The Splitline algorithm says, first define A to be D/2 rounded down and B to be D/2 rounded up (so A = B when D is even). For example, if D=3, then A=1 and B=2. Next we divide the state into two pieces whose populations PA and PB are in the ratio A:B, and in particular, we do this with the shortest straight line possible. So, for example, if we started with a state of population 30, we would have PA=10 and PB=20. Now we have two regions, and they need to be broken into A and B districts, respectively, so we just repeat the splitting process on each. Simple as that.
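Leaving aside the hard geometric step (searching for the shortest line that splits the population in the right ratio), the recursive bookkeeping is only a few lines of code. This is my own toy sketch of that recursion, not Smith's implementation, and the helper names are made up:

```python
def split_counts(d: int) -> tuple[int, int]:
    """Split d districts into d/2 rounded down and d/2 rounded up."""
    return d // 2, d - d // 2

def splitline_plan(d: int, depth: int = 0) -> None:
    """Print the tree of population-ratio splits for d districts.
    The geometric search for the shortest splitting line is omitted."""
    indent = "  " * depth
    if d <= 1:
        print(indent + "1 district: done")
        return
    a, b = split_counts(d)
    print(indent + f"{d} districts: split the population in ratio {a}:{b}")
    splitline_plan(a, depth + 1)
    splitline_plan(b, depth + 1)

splitline_plan(3)  # the D=3 example from above
```

For D=3 this prints a 1:2 split followed by a 1:1 split of the larger piece, matching the 10-and-20 example above.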
And simplicity, Smith argues, is precisely what makes this algorithm so strong. It is easy to understand, and therefore difficult to mess with. As he says, "you want to have something so incredibly simple--something a child could understand. Something natural." Basically, by choosing districts in a mathematical way, we completely eliminate the opportunity for human intervention. For historical reasons, Smith explains, Illinois is a particularly egregious example of gerrymandering and therefore could benefit from a more natural division of congressional districts. For some states, like Colorado (below), which has a highly unusual population distribution, the Splitline algorithm isn't necessarily a great improvement over the current map, but Smith feels confident that it's certainly no worse.
Image: Wikimedia Commons, right: courtesy of rangevoting.org
Check out how the Splitline algorithm changes the look of your own congressional district. In addition to his algorithm, Smith's site hosts a comprehensive and easy-to-understand explanation of the drawbacks and benefits of different voting schemes, and a lengthy page of voting theory puzzles and brain teasers--enough to keep you amused until at least the next presidential primaries.
See "Can math save Illinois from gerrymandering?" by Scarlett Swerdlow. Chicago Sun-Times, 18 May 2014.
--- Anna Haensch
Implementation of the Common Core Standards for math education--a system 44 states have adopted for codifying what mathematics skills students should learn when--has resulted in a backlash from parents frustrated by demands that students use new methods to attack simple arithmetic problems. Proponents of the new standards say the system is intended to help students be problem-solvers and understand the concepts behind arithmetic operations. On the other side, critics contend that grade school students are too young to grasp this "fuzzy math," and that focusing on concepts rather than basic techniques leaves them ill-prepared for the rigorous application needed for success at higher grade levels. Similarly, parents are aggravated that they are unable to help their children with homework using seemingly newfangled and unnecessary techniques. Some educators blame the implementation, rather than the standards themselves, and school districts have set up workshops and websites to help parents understand the new approach.
See "2+2=What? Parents rail against Common Core math." The Wall Street Journal, 15 May 2014 and a piece in USA Today by former AMS Executive Director John Ewing.
--- Lisa DeKeukeleare
This short biography of Harvey Mudd College President Maria Klawe (left) highlights how her humble beginnings with hand-me-down clothes and her post-college global wandering dovetailed into her passion for making Harvey Mudd a top-notch institution where students of all genders, races, and sexual orientations would feel embraced. Prior to running Harvey Mudd, Klawe was a renowned mathematician who had a successful career at IBM, the University of British Columbia, and Princeton, but who felt like others didn't think she belonged. Since taking over at Harvey Mudd in 2006 as the school's first female president, Klawe has raised the enrollment of women majoring in computer science from 10% to 40%, and her current focus is boosting participation for African-Americans and LGBT students. Klawe notes that she thinks of Harvey Mudd as "an innovation lab" for student participation, and she strives to create an environment in which any student willing to work hard will be supported. Photo © Kevin Mapp/Harvey Mudd College.
See "Maria Klawe: The Forward-Thinking President of Harvey Mudd College," by Liana Aghajanian. LA Weekly, 14 May 2014.
--- Lisa DeKeukeleare
In this article, writer Tracey Lien profiles 21-year-old Iranian game developer, Mahdi Bahrami, whose geometric puzzle game Engare was a hit at this year's Experimental Gameplay Workshop. At the workshop, Bahrami demonstrated an earlier version of Engare, in which the player is challenged to replicate a given shape, such as a spiral, by figuring out where on a moving object to place a dot so that the object will trace that shape. Workshop host Robin Hunicke, who first saw Bahrami's work in 2010, notes that "the drive to explore experimental mechanics is at [the] very core" of Bahrami's current work. At the same time, Bahrami has incorporated elements from his own culture, such as music, into his games. (His first game, Farsh, which he coded in two weeks, involves the building of paths using Persian carpets that can be rolled and unrolled.) Bahrami has also played up the fact that "the pattern making process explored through the geometric puzzles [mirrors] that of Islamic art making." After all, "Islamic art is all about mathematics," he notes. Image: Screen shot of Engare.
See "An Iranian developer's entrancing game about his culture and the mathematics of art," by Tracey Lien. Polygon, 12 May 2014.
--- Claudia Clark
In this brief article, math and science writer Dana Mackenzie describes a new solution to a problem that was presented 50 years ago in the pages of Scientific American: "Can you place seven cigarettes so that each cigarette touches every one of the others?" The problem was solved by Martin Gardner at the time, but required some of the cigarette sides to touch others' ends. This past March, at a Gathering 4 Gardner, Sándor Bozóki of the Hungarian Academy of Sciences presented a solution that he and two colleagues recently found using only side-to-side contacts and infinitely long cylinders. They "used three months of computer time to solve 20 equations in 20 variables," after which they used a "'certified' method developed by Isaac Newton, and updated by computer scientists in 2011, to prove that their solution was not an artifact of computer rounding errors." Photo by Dana Mackenzie.
See "A tale of touching tubes," by Dana Mackenzie. Science News, 3 May 2014 and a related story, although also short, "Meeting of the Puzzlers," by Dana Richards. Scientific American, June 2014, page 23.
--- Claudia Clark
From deciding who gets the last piece of pizza, to choosing which house gets to auction off a multimillion dollar collection, rock paper scissors is a ubiquitous tool for resolving competitive conflicts. But increasingly, the coin-less coin toss is moving out of conflict resolution and into professional sports, show business (Regular Show, Big Bang Theory), and even scientific research. In a recent preprint posted to the arXiv, a team of three (of course) physicists led by Zhijian Wang, from China's Zhejiang University, used experiment and theory to study the dynamics of RPS. Recruiting 60 groups of 6 undergraduates to play the game in 300-round, random-pair "tournaments" and record their moves, the team calculated the proportion of players in each of the game's three "states", at each round. In this situation, with limited information about their opponents' histories, the players were subtly guided by a pair of universal strategies. First, players were more likely to repeat their move than to shift to another, especially when they had just won. Second, players seemed to shift their moves in one direction--from rock, to paper, to scissors. This could be observed in "a deterministic pattern of social state cycling from slightly rich in action R, to slightly rich in P, then to slightly rich in S, and then back to slightly rich in R again." On an individual level, such a shift was more likely to occur if the player had lost on the previous round, in which case they would choose the move that had just beaten them. These behaviors can't be accounted for by classical "infinite rationality" game theories--for example, the Nash equilibrium strategy of choosing rock, paper, or scissors uniformly at random--or by strategies which are contingent only upon the player's previous moves. Instead, the authors proposed a "conditional response model" implementing a "win stay/lose shift" strategy--the kind of strategy that fits perfectly into an evolutionary framework. 
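Here is a minimal simulation in the spirit of that conditional-response model. The 0.5 stay/shift bias is an arbitrary illustrative number, not the value the authors fit, and the helper names are mine:

```python
import random

random.seed(0)
MOVES = ["R", "P", "S"]
BEATS = {"R": "S", "P": "R", "S": "P"}  # key beats value

def next_move(prev_move: str, prev_result: str, bias: float = 0.5) -> str:
    """Win-stay / lose-shift with a tunable bias (illustrative only).
    After a loss, the shift is to the move that just beat you."""
    if prev_result == "win" and random.random() < bias:
        return prev_move
    if prev_result == "lose" and random.random() < bias:
        return next(m for m in MOVES if BEATS[m] == prev_move)
    return random.choice(MOVES)

def play(rounds: int = 10_000) -> dict:
    """Pit two conditional responders against each other; tally player A's moves."""
    a, b = random.choice(MOVES), random.choice(MOVES)
    counts = {m: 0 for m in MOVES}
    for _ in range(rounds):
        counts[a] += 1
        if a == b:
            res_a = res_b = "tie"
        elif BEATS[a] == b:
            res_a, res_b = "win", "lose"
        else:
            res_a, res_b = "lose", "win"
        a, b = next_move(a, res_a), next_move(b, res_b)
    return counts

print(play())
```

Over many rounds each move is still played about a third of the time--the conditional structure shows up in the transitions between rounds, not in the overall frequencies, which is exactly why it is invisible to a naive tally.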
Plenty of people are touting the reverse strategy--which amounts to "lose, choose the unplayed move; win, copy your opponent"--as a sure-fire winner (reported in Slate and Ars Technica). But a few rounds of RPS will convince you that the strategy is easier said than done. Furthermore, the probability of a loser picking up the move of their opponent is only 40%, meaning Wang et al's universal strategy is more of a bias than a guarantee ("Social cycling and conditional responses in the Rock-Paper-Scissors game" on arxiv).
So RPS is safe, for now, and the conflict-prone among us will be happy to know there is no shortage of alternatives--not only the poignantly renamed "chicken-worm-human", but 5, 7, 9, 11, 15, 25, and 101 move expansions of RPS exist. But even if they haven't discovered a way to "always win at RPS", Wang and his colleagues have provided further evidence for the "bounded rationality" of human strategists--an understatement if I ever heard one.
See "How to win at rock-paper-scissors," by James Morgan, BBC News, 2 May 2014; "A Mathematician Explains How to Always Win at Rock-Paper-Scissors," by Harrison Jacobs, Business Insider, 5 May 2014; "Scientists find a winning strategy for rock-paper-scissors," by Casey Johnston, Ars Technica, 1 May 2014, and "Rock, Paper, Payoff: Child's Play Wins Auction House an Art Sale," by Carol Vogel, New York Times, 29 April 2005).
--- Ben Pittman-Polletta
British mathematician Roger Penrose's innovative method for creating infinitely non-repeating configurations of simple shapes not only was revolutionary mathematics, but also provided the foundation for improving everyday goods. In 1974, Penrose demonstrated using just two four-sided shapes--a "kite" and a "dart"--that it is possible to create a tiling pattern with no gaps or overlaps that would never repeat. In a field previously focused on repetitive tiling patterns with rectangles and triangles, Penrose's system was remarkable because a decision about placing a kite or a dart in one location would affect whether a kite or dart could be used at a separate location in the distance, so that local decisions affected global structure. Eight years later, an Israeli astrophysicist used Penrose's system to explain strange characteristics he observed in crystalline structures of aluminum-manganese alloy, which unlike typical crystals were non-repetitive. Properties of these special "quasicrystals" make them ideal for items such as nonstick cookware, razors, and 3D printers. [More on Penrose tiles is in a Feature Column by David Austin.]
See "Impossible Cookware and Other Triumphs of the Penrose Tile," by Patchen Barss. Nautilus--Symmetry Issue (issue 13), 1 May 2014.
--- Lisa DeKeukeleare
If you've ever had to share the last piece of cake with someone, you may have used the "I cut, you choose" method for dividing as fairly as possible. But when it comes to more complex situations, such as one person buying another person out of a business that both own, that method probably won't satisfy at least one of the parties. In this article, writer Erica Klarreich describes a few of the different types of fair division methods that exist--such as Fair Buy-Sell--and defines some of the properties of fair division methods. For example, a method is envy-free if "neither participant would willingly trade her share for the other share"; a method is equitable if all participants place the same value on their shares; and a method is considered to be efficient if there are no "other divisions that can improve some or all participants' shares without making anyone worse off." Mathematicians have proved that there is always a way to divide a piece of cake between two people that is envy-free, equitable, and efficient, even if there is "no simple algorithm for identifying it." However, "in some division problems, mathematicians have shown that no ideal split exists" and that different trade-offs are required, depending upon the type of division method.
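The "I cut, you choose" method mentioned above can be sketched in code. In this toy model (my own illustration, not from Klarreich's article), the cake is the interval [0, 1], each player's preferences are given by an additive valuation function totaling 1 over the whole cake, and the valuation functions `uniform` and `skewed` are invented for the example:

```python
# "I cut, you choose" for a cake modeled as the interval [0, 1].
# Each player's valuation maps a piece (a, b) to a number, additively,
# with the whole cake worth 1 (an illustrative assumption).

def cut_and_choose(cutter_value, chooser_value, precision=1e-6):
    """Cutter bisects the cake by her own measure; chooser takes the
    piece she values more. Returns (cutter_piece, chooser_piece)."""
    # Binary search for the point where the cutter values both sides equally.
    lo, hi = 0.0, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if cutter_value(0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    cut = (lo + hi) / 2
    left, right = (0.0, cut), (cut, 1.0)
    # The chooser picks the piece she values more, so she never envies.
    if chooser_value(*left) >= chooser_value(*right):
        return right, left
    return left, right

# Cutter values the cake uniformly; chooser values the right half twice as much.
uniform = lambda a, b: b - a
skewed = lambda a, b: ((max(b, 0.5) - max(a, 0.5)) * (4 / 3)
                       + (min(b, 0.5) - min(a, 0.5)) * (2 / 3))

cutter_piece, chooser_piece = cut_and_choose(uniform, skewed)
print(cutter_piece, chooser_piece)
```

By construction the outcome is envy-free--the cutter values both pieces equally and the chooser takes the piece she prefers--but, as the article notes for more complex situations, it is generally neither equitable nor efficient: a cutter who knew the chooser's preferences could do better for herself.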
See "Math Shall Set You Free--From Envy," by Erica Klarreich. Nautilus--Symmetry Issue (issue 13), 1 May 2014.
--- Claudia Clark
Given an arbitrary sequence of numbers, how can one tell if it is random? In this American Scientist feature, MIT quantum computer scientist Scott Aaronson attempts to answer this question, which I gather most mathematicians will find as infuriatingly ill-posed as I did. Probability theory tells us that the answer is "it depends"--every sequence of numbers is an output (even a highly probable output) of some tailored stochastic process or probability distribution--and, as Aaronson shows, the very same sequence of numbers is also the output of a suitably tailored deterministic process. Still, Aaronson makes a valiant effort to make rigorous the colloquial understanding of the word "random" to mean "patternless." For example, the sextuplets (3, 1, 4, 1, 5, 9) and (6, 6, 6, 6, 6, 6) are equally probable results of picking six digits (uniformly) at random. And yet neither of them seems random in the colloquial sense, because there are patterns to be discovered in each. As Aaronson explains, this intuition can be made rigorous by a quantity known as the "Kolmogorov complexity." The Kolmogorov complexity of a sequence x is the minimum length (in bits) of a computer program that outputs x. So, roughly speaking, the Kolmogorov complexity of a sequence of numbers measures how difficult it is to describe that sequence. The above sequences, and their continuations--the first n digits of pi and the digit 6 repeated n times--all have low Kolmogorov complexity. A long sequence that is more "patternless" would have high Kolmogorov complexity, as any program outputting it would have to contain nearly as many bits as are required to write down the sequence itself. Unfortunately, the Kolmogorov complexity (as you might gather from that minimum over all computer programs) is not itself computable.
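Kolmogorov complexity itself is uncomputable, but a standard way to illustrate the idea (my illustration, not Aaronson's) is to use a general-purpose compressor as a crude, computable upper bound on description length:

```python
import random
import zlib

# A compressor gives an upper bound on description length: a patterned
# sequence like 6, 6, 6, ... compresses to almost nothing, while a
# typical random sequence barely compresses at all.

repetitive = bytes([6] * 1000)     # the digit 6, repeated
random.seed(0)                     # fixed seed for reproducibility
noisy = bytes(random.randrange(256) for _ in range(1000))

len_rep = len(zlib.compress(repetitive))
len_noisy = len(zlib.compress(noisy))
print(len_rep, len_noisy)   # the repetitive sequence compresses far more
```

The caveat, central to Aaronson's point, is that this bound can be very loose: the first thousand digits of pi would look incompressible to zlib, yet they have low Kolmogorov complexity, since a short program generates them.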
So there is no guaranteed way to determine the Kolmogorov complexity of a sequence, and Kolmogorov complexity cannot be the general answer to the question of how to recognize a "random" number. But randomness and patternlessness are aligned--ever more so for longer and longer sequences. Indeed, since there are 2^n sequences of n bits but fewer than 2^(n-k) computer programs shorter than n - k bits, a simple counting argument shows that an arbitrary sequence is overwhelmingly likely to have Kolmogorov complexity close to its length. So, as Aaronson points out, "if something is random, then it's random"--where "random" might, in fact, have two different meanings.
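The counting argument can be made concrete (this is a sketch of the standard argument, with the bound made explicit, not a computation from the article): programs are themselves binary strings, and there are only 2^m - 1 binary strings shorter than m bits, so at most 2^(n-k) - 1 of the 2^n length-n sequences can be produced by a program of fewer than n - k bits:

```python
def compressible_fraction_bound(n, k):
    """Upper bound on the fraction of n-bit sequences whose Kolmogorov
    complexity is below n - k: there are fewer than 2**(n - k) candidate
    programs, but 2**n sequences to describe."""
    return (2 ** (n - k) - 1) / 2 ** n

# The fraction of sequences compressible by k or more bits shrinks
# exponentially in k, so a typical sequence is nearly incompressible.
for k in (1, 10, 20):
    print(k, compressible_fraction_bound(1000, k))
```

For example, fewer than one sequence in a thousand can be compressed by even 10 bits, which is why "random" in the probabilistic sense and "random" in the patternless sense agree for the overwhelming majority of long sequences.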
It is at this somewhat interesting and only mildly meaningless point that Aaronson has the bad manners to introduce us to the BosonSampler. A BosonSampler is an optical device that flings a pack of photons down a maze of waveguides and measures the (random) distribution of places they end up. "The problem," says Aaronson, "is that we don't know how to do anything useful with a BosonSampler." This rather serious problem seems to be compounded by the fact that BosonSamplers are very hard to simulate, but this weakness turns out to be the misbegotten devices' single strength. For the BosonSampler, a kind of quantum computer, can be used--to prove that quantum computers can do something classical computers can't! Indeed, the first BosonSampler--a three-photon device--was recently built, and its output was found to match theoretical predictions ("Photonic Boson Sampling in a Tunable Circuit," by Matthew A. Broome, et al.). According to Aaronson, the construction of a BosonSampler large enough to exceed the capabilities of a classical computer awaits only the resolution of "major experimental challenges." At this point you may be tempted to condemn BosonSamplers as dangerously ill-considered objects with entirely appropriate initials. If you did, though, you would miss Aaronson's daring final question: if there were a BosonSampler, how could we be sure it was working? I'll rephrase that for the incredulous: if there were a machine that claimed to be sampling from the possible configurations of a BosonSampler passing a large number of photons, how could we be sure it wasn't just feeding us the digits of pi? It turns out that even this is a sticky wicket when it comes to BosonSamplers. The answer, according to Aaronson, is to require the BosonSampler to return a configuration that has both a large probability of occurrence and a large Kolmogorov complexity.
This turns a "sampling" problem into a "searching" problem--something that Aaronson has recently proved can always be done. If you want to know how these searching problems are to be executed, given that the Kolmogorov complexity is uncomputable, then you have a more patient and tenacious mind than I, and I suggest you take up quantum computing.
--- Ben Pittman-Polletta (posted 6/3/14)