Saturday, August 30, 2014

Making houses out of mushrooms


Much of the construction industry depends on fossil fuels, creating a big carbon footprint. As pressure mounts to make construction "greener", experts have started to design houses out of hemp and straw, and bricks made of mushrooms.
From a distance, it looks like something out of a desert landscape, ancient and handmade.
The closer you get, the more you see something much more modern in the curves of this tower, assembled from 10,000 bricks.
But it is only when you examine one of those bricks close-up that you get a sense of what the future might hold. Using bioengineering, this structure has been made from mushrooms.
"This is a hybrid of what I call an ancient technology of mushrooms and a totally new technology of computation and engineering," says architect David Benjamin.
The mushroom - or mycelium, the vegetative part of the fungus - is an ideal material, Mr Benjamin explains.
These bricks score high marks for sustainability because they were "grown" with no carbon emissions and no waste.
The 40ft (12m) structure he is referring to currently sits in a courtyard at MoMA PS1, an art gallery in New York.
The mushroom brick is "grown" by mixing together chopped-up corn husks with mycelium.
The mixture is then put into a brick mould and left to grow for five days. The result is a brick that is solid, but lightweight.
The "mushroom tower" is then assembled using a custom algorithm to lay the bricks layer by layer.
This method lets builders use local materials like agricultural waste, and also makes the bricks biodegradable.
These particular bricks were created from materials in the New York area. But the method can travel. In places where rice is abundant, people can use rice hulls in the mixture with mycelium to create bricks.
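The custom assembly algorithm mentioned above has not been published, so the following is only a minimal sketch of the layer-by-layer idea in Python; the tower dimensions, brick course height and brick counts are illustrative assumptions, not The Living's actual design data.

```python
import math

def brick_layout(layers=120, base_radius=2.0, top_radius=1.2, bricks_per_layer=84):
    """Hypothetical layer-by-layer placement: (x, y, z, angle) for each brick
    in a tower that tapers from base_radius to top_radius (all values assumed)."""
    positions = []
    for layer in range(layers):
        t = layer / (layers - 1)                            # 0 at the base, 1 at the top
        radius = base_radius + t * (top_radius - base_radius)
        z = layer * 0.1                                      # assumed course height in metres
        offset = (layer % 2) * math.pi / bricks_per_layer    # stagger alternate courses
        for i in range(bricks_per_layer):
            angle = offset + 2 * math.pi * i / bricks_per_layer
            positions.append((radius * math.cos(angle), radius * math.sin(angle), z, angle))
    return positions

print(len(brick_layout()), "bricks in the toy layout")       # 10080, roughly the tower's 10,000
```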
How the mushroom house was built
  • Old cornstalks and parts of mushrooms were collected
  • The organic material was put into a mould and then allowed to set as bricks
  • The bricks were arranged to create the structure
  • Some of the blocks at the top of the building were covered in light-refracting film
Source: Museum of Modern Art
Mr Benjamin's belief in the power of biotechnology is evident in the name of his architectural firm, The Living.
"We want to use living systems as factories to grow new materials," he says. "Hopefully this will help us see cities more as living breathing organisms than solid, static, inert places."
Meanwhile another architect has also been growing "bio-bricks", using a different process.
Ginger Krieg Dosier is the creator of a brick made by packing sand and bacteria into a mould and feeding the mixture with a nutrient solution. Five days later, the bricks are removed and ready to use.
The chemical reaction caused by this mixture "bio-cements" the grains together to create a solid brick.
This quest for the bio-brick took Ms Dosier from the world of architecture to science, where she consulted with microbiologists and chemists in order to come up with a formula.
"Even as a child, I have been fascinated with how nature is able to produce durable and structural cements in ambient temperatures," she says.
Her brick is now being used in a pilot project to make paving.
She worked for a while in the United Arab Emirates - where sand, of course, is plentiful - but has now relocated her company, BioMason, to North Carolina.
The work of Mr Benjamin and Ms Dosier points to a new level of innovation which some say is much needed in the building industry.
"While they are experimental, it is very exciting to see these types of leapfrog technologies that take cues from nature to find creative alternatives to some of the oldest conventions in design," says Jacob Kriss from the US Green Building Council.
The council is responsible for a rating system called LEED, which rewards sustainable design in buildings. Mr Kriss says the building sector is responsible for almost 40% of carbon dioxide emissions in the US.
"There is an unquestionable imperative to green our stock of both new and existing buildings," Mr Kriss says.
"It is these types of innovations that can help us turn the corner to create resilient, healthy, high-performing structures that are better for the planet and the people who use them every day."

Friday, August 29, 2014

Google building fleet of package-delivering drones



This undated image provided by Google shows a Project Wing drone vehicle during delivery. Photo / AP, Google

Google's secretive research laboratory is trying to build a fleet of drones designed to bypass earthbound traffic so packages can be delivered to people more quickly.
The ambitious program announced Thursday escalates Google's technological arms race with rival Amazon.com Inc., which also is experimenting with self-flying vehicles to carry merchandise bought by customers of its online store.
Amazon is mounting its own challenges to Google in online video, digital advertising and mobile computing in a battle that also involves Apple Inc.
Google Inc. calls its foray into drones "Project Wing."
Although Google expects it to take several more years before its fleet of drones is fully operational, the company says test flights in Australia two weeks ago delivered a first aid kit, candy bars, dog treats and water to two farmers after traveling roughly one kilometer, or just over a half mile. Google's video of the test flight, set to the strains of the 1969 song "Spirit In The Sky," can be seen here.
Besides perfecting their aerial technology, Google and Amazon still need to gain government approval to fly commercial drones in many countries, including the U.S. Amazon last month asked the Federal Aviation Administration for permission to expand its drone testing. The FAA currently allows hobbyists and model aircraft makers to fly drones, but commercial use is mostly banned.
Project Wing is the latest venture to emerge from Google's "X'' lab, which has also been working on self-driving cars as well as other far-flung innovations that company CEO Larry Page likens to "moonshots" that push the technological envelope. The lab's other handiwork includes Internet-connected eyewear called Google Glass, Internet-beaming balloons called Project Loon and a high-tech contact lens that monitors glucose levels in diabetics.
Google says it is striving to improve society through the X lab's research, but the Glass device has faced criticism from privacy watchdogs leery of the product's ability to secretly record video and take pictures. Investors also have periodically expressed frustration with the amount of money that Google has been pouring into the X lab without any guarantee the products will ever pay off.
A team led by Massachusetts Institute of Technology aeronautics professor Nick Roy already has been working on Project Wing for two years, according to Google. The Mountain View, California, company didn't disclose how much the project has cost.
Drones clearly could help Google expand an existing service that delivers goods purchased online on the day that they were ordered. Google so far is offering the same-day delivery service by automobiles in parts of the San Francisco Bay Area, Los Angeles and New York.
"Self-flying vehicles could open up entirely new approaches to moving goods, including options that are cheaper, faster, less wasteful and more environmentally sensitive than what's possible today," Google said in a pamphlet outlining Project Wing.
Google, though, seems to see its drones as something more than another step in e-commerce delivery. The aerial vehicles also could make it easier for people to share certain items, such as a power drill, that they may only need periodically and carry emergency supplies to areas damaged by earthquakes, hurricanes and other natural catastrophes, according to Google's Project Wing pamphlet.
-AP

Monday, August 25, 2014

Autonomous bots like self-driving cars don’t see the world like us


Autonomous bots like self-driving cars don’t see the world like us. Frank Swain discovers why this could be a problem.
Can you tell the difference between a human and a soda can? For most of us, distinguishing an average-sized adult from a five-inch-high aluminium can isn’t a difficult task. But to an autonomous robot, they can both look the same. Confused? So are the robots.
Last month, the UK government announced that self-driving cars would hit the roads by 2015, following in the footsteps of Nevada and California. Soon autonomous robots of all shapes and sizes – from cars to hospital helpers – will be a familiar sight in public. But in order for that to happen, the machines need to learn to navigate our environment, and that requires a lot more than a good pair of eyes.
Robots like self-driving cars don’t only come equipped with video cameras for seeing what we can see. They can also have ultrasound – already widely used in parking sensors – as well as radar, sonar, laser and infrared. These machines constantly send out flashes of invisible light and sound, and carefully study the reflections to see their surroundings – pedestrians, cyclists and other motorists. You’d think that would be enough for a comprehensive view, but there’s a big difference between seeing the world and understanding it.
How a robot car sees the world around it
Which brings us back to the confusion between cans and pedestrians. When an autonomous car scans a person with its forward-facing radar, they show the same reflectivity as a soda can, explains Sven Beiker, executive director of the Center for Automotive Research at Stanford University. “That tells you that radar is not the best instrument to detect people. The laser or especially camera are more suited to do that.”
The trouble is that the more sensors you add, the more abstract data points the robot’s brain has to organise into a coherent picture of the world.
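The carmakers' actual fusion code is proprietary, but the problem Beiker describes, weighing weak evidence from several sensors against each other, can be sketched as a toy weighted-confidence model; the sensor names, weights and readings below are illustrative assumptions only.

```python
# Toy sensor fusion: combine each sensor's confidence (0..1) that an object is a pedestrian.
# The sensors, weights and readings are illustrative assumptions, not real vehicle data.
SENSOR_WEIGHTS = {"radar": 0.2, "lidar": 0.4, "camera": 0.4}   # radar trusted least for people

def fuse(confidences):
    """Weighted average of the available sensors' confidences."""
    total = sum(SENSOR_WEIGHTS[s] * c for s, c in confidences.items())
    weight = sum(SENSOR_WEIGHTS[s] for s in confidences)
    return total / weight

# Radar alone sees a small, soda-can-like reflection; lidar and camera disagree with it.
readings = {"radar": 0.1, "lidar": 0.8, "camera": 0.9}
print("pedestrian likelihood:", round(fuse(readings), 2))      # 0.7: treat it as a person, slow down
```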
Just an illusion
We take for granted what goes into creating our own view of the road. We tend to think of the world falling onto retinas like the picture through a camera lens, but sight is much more complicated. “The whole visual system shreds images, breaks them up into maps of colour, maps of motion, and so on, and somehow then manages to reintegrate that,” explains Peter McOwan, a professor of computer science at Queen Mary, University of London. How the brain performs this trick is still a mystery, but it’s one he’s trying to replicate in robot brains by studying what happens when we have glitches in our own vision. 
There are some images that our brains consistently put together incorrectly, and these are what we call optical illusions. McOwan is interested in optical illusions because if his mathematical models of vision can predict new ones, it's a useful indicator that the model is reflecting human vision accurately. “Optical illusions are intrinsically fascinating magic tricks from nature but at the same time they are also a way to test how good your model is,” he says.
Most robots, for example, would not be fooled by the Adelson checkerboard illusion where we think two identical grey squares are different shades:
The squares marked A and B are the same shade of grey (Wikipedia)
“Humans looking at this illusion process the image and remove the effect of the shadow, which is why we end up seeing the squares as different shades of grey,” explains McOwan.
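The difference is easy to show numerically. In the toy comparison below, the grey levels and illumination estimates are invented for the sake of the example, not sampled from the real image: a naive program compares raw values and sees no difference, while a crude shadow correction, standing in for what human vision does, makes the two squares look different.

```python
# Raw comparison (what a naive program does) vs. shadow-corrected comparison
# (a crude stand-in for human vision). All numbers are invented for illustration.
raw_A, raw_B = 120, 120          # identical grey levels in the image
illum_A, illum_B = 1.0, 0.55     # estimated illumination: square B sits in the cylinder's shadow

print(raw_A == raw_B)                       # True: the machine sees no difference
print(raw_A / illum_A, raw_B / illum_B)     # 120.0 vs ~218.2: 'perceived' reflectances differ
```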
Although it might seem like the machine wins this round, robots have problems recognising shadows and accounting for the way they change the landscape. “Computer vision suffers really badly when there are variations in lighting conditions, occlusions and shadows,” says McOwan. “Shadows are very often considered to be real objects.”
Hijack alert
This is why autonomous vehicles need more than a pair of suitably advanced cameras. Radar and laser scanners are necessary because machine intelligences need much more information to recognise an object than we do. It’s not just places and objects that robots need to recognise. To be faithful assistants and useful workers, they need to recognise people and our intentions. Military robots need to correctly distinguish enemy soldiers from frightened civilians, and care robots need to recognise not just people but their emotions – even if (perhaps especially if) we’re trying to disguise them. All of these are pattern-recognition problems.
The contextual awareness needed to safely navigate the world is not to be taken lightly. Beiker gives the example of a plastic ball rolling into the road. Most human drivers would expect that a child might follow it, and slow down accordingly. A robot can too, but distinguishing between a ball and a plastic bag is difficult, even with all of their sensors and algorithms. And that’s before we start thinking about people who might set out to intentionally distract or confuse a robot, tricking it into driving onto the pavement or falling down a staircase. Could a robot recognise a fake road diversion that might be a prelude to a theft or a hijacking?
Military bots are currently guided by humans, but autonomy may grow in the future (SPL)
McOwan isn’t overly worried by the prospect of criminals sabotaging autonomous machines. “It’s more important that a robot acts predictably to the environment and social norms rather than correctly,” he says. “It’s all about what you would do, and what you would expect a robot to do. At the end of the day if you step into a self-driving car, you are at the mercy of the systems surrounding you.”
No technology on Earth is 100% safe, says Beiker, but he questions the industry's emphasis on preventing failures rather than on what makes the technology work. "I found it amazing how much time the automotive industry spends on things they don't want to happen compared to the time they spend on things they do want to happen," he says.
He admits that for the foreseeable future we have to have a human monitoring the system. “It’s not realistic to say any time soon computers will take over and make all decisions on behalf of the driver. We’re not there yet.”  

Tuesday, August 12, 2014

Artificial music: The computers that create melodies


Can computers compose beautiful, emotional music? Phil Ball discovers a new algorithmic composer challenging our ideas of what music itself should be.
When Peter Russell first heard the unusual music, he was pleasantly surprised. It was a “delightful piece of chamber music”, he wrote, reminiscent of French pieces written in the early 20th Century. “After repeated hearings, I came to like it.”
What Russell, a musicologist, didn’t know was that the score titled Hello World had actually been composed much more recently by a computer called Iamus. Other listeners in blind tests have been similarly fooled. (Why not listen to it yourself as you read this article?)
Iamus is the creation of computer scientist Francisco Vico and his collaborators at the University of Malaga in Spain. It also has a younger sibling, called Melomics109, which composes ‘popular’ music.
You might think that any serious composers would turn up their noses at music made by a computer algorithm. But a few are already taking Iamus’s ideas very seriously. In 2012, a CD showcasing Iamus’s compositions featured performances by some of the world’s top musicians, including the London Symphony Orchestra. One of the other musicians to appear on the recording was Gustavo Diaz-Jerez, a composer and concert pianist at the Centro Superior de Música del País Vasco, in Spain, who is even using Iamus to write an opera that premieres next year.
No previous attempts to make music by computer – and there have been many, dating back to the early days of computation – have been afforded such serious attention.
A track by Melomics109
Even a cursory listen to Iamus is likely to persuade sceptics that it has come a long way from earlier efforts at computer-composed music such as “Emily Howell”, a program devised by American music professor David Cope. The key to Iamus’s success is an algorithm that mimics the process of natural selection. It takes a fragment of music (itself generated at random), of any length, and mutates it. Each mutation is assessed to see whether it conforms to particular rules – some generic, such as that the notes have to be playable on the instrument in question, others genre-specific, so that features like the melodies and harmonies fit with what is typical for that style. Little by little, the initial random fragment becomes more and more like real music, and the ‘evolutionary process’ stops when all the rules are met. In this way, hundreds of variants can be generated from the same starting material.
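Vico's group has not published Iamus's source, but the evolutionary loop described above can be sketched in miniature; the pitch range, the two rules and the mutation step below are simplified assumptions for illustration, not the real system's.

```python
import random

PITCH_RANGE = range(55, 80)   # assumed playable MIDI range for the instrument (illustrative)

def violations(melody):
    """Toy stand-in for the rule checks: out-of-range notes and leaps larger than a fifth."""
    out_of_range = sum(p not in PITCH_RANGE for p in melody)
    big_leaps = sum(abs(a - b) > 7 for a, b in zip(melody, melody[1:]))
    return out_of_range + big_leaps

def mutate(melody):
    """Nudge one randomly chosen note, a crude analogue of the mutation step."""
    copy = list(melody)
    i = random.randrange(len(copy))
    copy[i] += random.choice([-2, -1, 1, 2])
    return copy

# Start from a random fragment and keep only mutations that break no more rules,
# stopping when every rule is satisfied - the 'evolutionary process' in miniature.
melody = [random.randint(40, 90) for _ in range(8)]
while violations(melody) > 0:
    candidate = mutate(melody)
    if violations(candidate) <= violations(melody):
        melody = candidate
print(melody)
```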
In a sense, these algorithms are not doing anything so very different from the way composers have always composed. Composing fugues, for example – which has been done from the Baroque to the modern era – involves taking a small melodic idea and applying permutations and rules that extend, develop and interweave it in overlapping voices, while preserving some basic and essential rules of harmony. Forms such as sonatas and concertos were also structured by clear rules.
Alphard, by Iamus, on the clarinet
Iamus’s key works so far won’t be to everyone’s taste. They are in the atonal modernist style that many people find austere and forbidding – think Birtwistle and Berio, not Brahms and Beethoven. But Diaz-Jerez finds them much richer and more satisfying than, say, the experiments of modernists in the 1950s and 60s using the technique of “total serialism”, which results in almost random choices of pitch, rhythm and other musical parameters. Some listeners might feel that, until Iamus can show itself capable of producing more familiar kinds of melody to compare with those of Mozart, its real potential as a musical maestro remains to be decided.
Others, such as the music critic Tom Service, have suggested that Iamus’s creators are making a mistake by programming it to generate music like that of human composers, using the same repertoire of traditional orchestral sounds, rather than seeing whether it can produce music that is more genuinely novel.
But Vico insists that these are early days. The possibility of generating new forms of music, perhaps by blending the rules of existing genres, is one of the prospects that excites him most.
Ugadi, by Iamus, on violin
Iamus’s ultimate value, however, might not be so much as a composer in its own right but as a factory of musical ideas, which human composers can mine for inspiration.
“In the future I think there will be two kinds of composer”, says Diaz-Jerez. “There will be those who admit to using the [Iamus] repository and those who don’t.”
You might be tempted to call this cheating – taking the output of a computer and calling it your own. But Diaz-Jerez argues that this would invoke a false idea of how music has always been composed. As well as using the aforementioned rules rather than some free flow of arbitrary ideas, composers have long borrowed ideas and fragments from one another. Several of J. S. Bach’s fugues in The Well-Tempered Clavier, often regarded as the epitome of his genius, use themes taken from earlier works. The other arts are no different either, with their constant mimicking of styles and themes.
Kinoth, by Iamus, on violin and piano
So delving into Iamus’s repository would be simply a routine extension of old practices. The truth is, Diaz-Jerez says, that composition is less the act of divine inspiration it is commonly perceived to be, and more of a methodical craft. Like all crafts, it often has to be produced to a deadline, and Diaz-Jerez says that the ready-mades offered by Iamus speed up the process considerably. One doesn’t have to use them intact, and he generally makes changes to effect improvements – a superfluous voice deleted here, a note altered there. But he finds that Iamus can already create compositions with great invention and complexity.
In fact Diaz-Jerez often finds himself personifying the software. Describing some of the riches at a recent meeting in Malaga, called Zero music: Music after the advent of the computer-composer, he was constantly (and tellingly) correcting himself: “What he – I mean, it – has done here is…”
And the fact that Iamus fools many listeners into believing its music is human means Iamus passes the musical equivalent of the “Turing test”, devised by computer-theory pioneer Alan Turing as a criterion for assessing whether a machine shows artificial intelligence: if you can’t tell the difference between the responses of a computer and those of a human, there’s no logical reason to deny it ‘intelligence’.

