Monthly Archives: March 2017

Computer that reads body language

Researchers at Carnegie Mellon University’s Robotics Institute have enabled a computer to understand the body poses and movements of multiple people from video in real time — including, for the first time, the pose of each individual’s fingers.

This new method was developed with the help of the Panoptic Studio, a two-story dome embedded with 500 video cameras. The insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

Yaser Sheikh, associate professor of robotics, said these methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other, and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, perceiving what the people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behavior could also open new approaches to behavioral diagnosis and rehabilitation for conditions such as autism, dyslexia and depression.

“We communicate almost as much with the movement of our bodies as we do with our voice,” Sheikh said. “But computers are more or less blind to it.”

In sports analytics, real-time pose detection will make it possible for computers not only to track the position of each player on the field of play, as is now the case, but to also know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the researchers have released their computer code for both multiperson and hand-pose estimation. It is already being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, Sheikh said.

Sheikh and his colleagues will present reports on their multiperson and hand-pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition Conference, July 21-26 in Honolulu.

Tracking multiple people in real time, particularly in social situations where they may be in contact with each other, presents a number of challenges. Simply using programs that track the pose of an individual does not work well when applied to each individual in a group, particularly when that group gets large. Sheikh and his colleagues took a bottom-up approach, which first localizes all the body parts in a scene — arms, legs, faces, etc. — and then associates those parts with particular individuals.
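The bottom-up association step can be sketched in a few lines of Python. The part names and the pairwise affinity function below are illustrative stand-ins (the actual method learns affinity scores from images), not the authors' implementation:

```python
# Sketch of bottom-up pose estimation: first detect ALL candidate body
# parts in the scene, then greedily associate pairs of parts into
# per-person skeletons, strongest association first.

def associate_parts(necks, wrists, affinity):
    """Greedily match detected wrists to detected necks.

    necks, wrists: lists of (x, y) keypoint candidates.
    affinity(a, b): higher means the two parts more likely belong to
    the same person (a stand-in for a learned affinity score).
    """
    pairs = sorted(
        ((affinity(n, w), i, j)
         for i, n in enumerate(necks)
         for j, w in enumerate(wrists)),
        reverse=True,
    )
    used_n, used_w, people = set(), set(), []
    for score, i, j in pairs:
        if i not in used_n and j not in used_w:
            used_n.add(i)
            used_w.add(j)
            people.append((necks[i], wrists[j]))
    return people

# Toy affinity: parts close together probably share an owner.
def affinity(a, b):
    return -((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

# Two people: their parts are grouped by proximity, not detection order.
people = associate_parts([(0, 0), (10, 0)], [(9, 2), (1, 1)], affinity)
```

The greedy matching is the key design point: each detected part is claimed by at most one person, which is what lets the method scale to groups.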

The challenges of hand detection are even greater. As people use their hands to hold objects and make gestures, a camera is unlikely to see all parts of the hand at the same time. And unlike for the face and body, no large datasets exist of hand images that have been laboriously annotated with labels of parts and positions.

But for every image that shows only part of the hand, there often exists another image from a different angle with a full or complementary view of the hand, said Hanbyul Joo, a Ph.D. student in robotics. That’s where the researchers made use of CMU’s multicamera Panoptic Studio.

“A single shot gives you 500 views of a person’s hand, plus it automatically annotates the hand position,” Joo explained. “Hands are too small to be annotated by most of our cameras, however, so for this study we used just 31 high-definition cameras, but still were able to build a massive data set.”

Joo and Tomas Simon, another Ph.D. student, used their hands to generate thousands of views.

“The Panoptic Studio supercharges our research,” Sheikh said. It now is being used to improve body, face and hand detectors by jointly training them. Also, as work progresses to move from the 2-D models of humans to 3-D models, the facility’s ability to automatically generate annotated images will be crucial.

When the Panoptic Studio was built a decade ago with support from the National Science Foundation, it was not clear what impact it would have, Sheikh said.

“Now, we’re able to break through a number of technical barriers primarily as a result of that NSF grant 10 years ago,” he added. “We’re sharing the code, but we’re also sharing all the data captured in the Panoptic Studio.”


Why you might trust a quantum computer with secrets, even over the internet

Here’s the scenario: you have sensitive data and a problem that only a quantum computer can solve. You have no quantum devices yourself. You could buy time on a quantum computer, but you don’t want to give away your secrets. What can you do?

Writing in Physical Review X on 11 July, researchers in Singapore and Australia propose a way you could use a quantum computer securely, even over the internet. The technique could hide both your data and program from the computer itself. Their work counters earlier hints that such a feat is impossible.

The scenario is not far-fetched. Quantum computers promise new routes to solving problems in cryptography, modelling and machine learning, exciting government and industry. Such problems may involve confidential data or be commercially sensitive.

Technology giants are already investing in building such computers — and making them available to users. For example, IBM announced on 17 May this year that it is making a quantum computer with 16 quantum bits accessible to the public for free on the cloud, as well as a 17-qubit prototype commercial processor.

Seventeen qubits are not enough to outperform the world’s current supercomputers, but as quantum computers gain qubits, they are expected to exceed the capabilities of any machine we have today. That should drive demand for access.

“We’re looking at what’s possible if you’re someone just interacting with a quantum computer across the internet from your laptop. We find that it’s possible to hide some interesting computations,” says Joseph Fitzsimons, a Principal Investigator at the Centre for Quantum Technologies (CQT) at the National University of Singapore and Associate Professor at Singapore University of Technology and Design (SUTD), who led the work.

Quantum computers work by processing bits of information stored in quantum states. Unlike the binary bits found in our regular (i.e., classical) computers, each a 0 or 1, qubits can be in superpositions of 0 and 1. The qubits can also be entangled, which is believed to be crucial to a quantum computer’s power.
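As a rough numerical illustration of these two ideas (not part of the paper), a few lines of numpy can build a superposition and an entangled Bell state:

```python
import numpy as np

# |0> and |1> as basis vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
plus = H @ ket0                      # (|0> + |1>)/sqrt(2)

# CNOT entangles two qubits: applied to |+>|0> it yields the Bell state
# (|00> + |11>)/sqrt(2), which cannot be factored into two independent
# single-qubit states -- measuring one qubit fixes the other.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
bell = CNOT @ np.kron(plus, ket0)
```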

The scheme designed by Fitzsimons and his colleagues brings secrecy to a form of quantum computing driven by measurements.

In this scheme, the quantum computer is prepared by putting all its qubits into a special type of entangled state. Then the computation is carried out by measuring the qubits one by one. The user provides step-wise instructions for each measurement: the steps encode both the input data and the program.
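That measurement-driven pattern can be simulated at its smallest scale, a single entangling step on two qubits. This is a generic measurement-based-computing sketch, not the protocol from the paper; the angle `theta` plays the role of the user's step-wise instruction:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)   # entangling gate

def mbqc_step(psi, theta, outcome=0):
    """Entangle the input qubit |psi> with |+>, then measure qubit 1 in
    the rotated basis {(|0> ± e^{i*theta}|1>)/sqrt(2)}.

    Returns the post-measurement state of qubit 2, which equals
    H · Rz(-theta)|psi> (up to an X correction when outcome == 1):
    the choice of measurement angle *is* the computation.
    """
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    state = CZ @ np.kron(psi, plus)            # cluster-state preparation
    sign = 1 if outcome == 0 else -1
    # Bra (already conjugated) for the chosen measurement basis vector.
    bra = np.array([1, sign * np.exp(-1j * theta)], dtype=complex) / np.sqrt(2)
    out = bra[0] * state[:2] + bra[1] * state[2:]   # project qubit 1
    return out / np.linalg.norm(out)

psi = np.array([1, 0], dtype=complex)          # input |0>
out = mbqc_step(psi, theta=0.0)                # theta = 0 applies H: |0> -> |+>
```

Nothing about the state of qubit 2 reveals, on its own, which rotation the angle encoded, which is the intuition behind hiding the program in the measurement instructions.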

Researchers have shown previously that users who can make or measure qubits to convey instructions to the quantum computer could disguise their computation. The new paper extends that power to users who can only send classical bits — i.e. most of us, for now.

This is surprising because some computer science theorems imply that encrypted quantum computation is impossible when only classical communication is available.

The hope for security comes from the quantum computer not knowing which steps of the measurement sequence do what. The quantum computer can’t tell which qubits were used for inputs, which for operations and which for outputs.

“It’s extremely exciting. You can use this unique feature of the measurement-based model of quantum computing — the way information flows through the state — as a crypto tool to hide information from the server,” says team member Tommaso Demarie of CQT and SUTD.

Although the owner of the quantum computer could try to reverse engineer the sequence of measurements performed, ambiguity about the role of each step leads to many possible interpretations of what calculation was done. The true calculation is hidden among the many, like a needle in a haystack.

The set of interpretations grows rapidly with the number of qubits. “The set of all possible computations is exponentially large — that’s one of the things we prove in the paper — and therefore the chance of guessing the real computation is exponentially small,” says Fitzsimons. One question remains: could meaningful computations be so rare among all the possible ones that the guessing gets easier? That’s what the researchers need to check next.

Nicolas Menicucci at the Centre for Quantum Computation and Communication Technology at RMIT University in Melbourne, Australia, and Atul Mantri at SUTD are coauthors of the work.

“Quantum computers became famous in the ’90s with the discovery that they could break some classical cryptography schemes — but maybe quantum computing will instead be known for making the future of cloud computing secure,” says Mantri.

New material resembling a metal nanosponge could reduce computer energy consumption

In the conventional magnetic memories of electronic devices, information is stored by orienting small magnetic domains to point up or down under an applied magnetic field. Generating these fields requires electric currents, but those currents heat the materials, and a large amount of energy is then spent cooling them. Nearly 40% of the electrical energy consumed by computers (and "Big Data" servers) is dissipated as heat.

In 2007, French scientists observed that when magnetic materials are deposited as ultra-thin layers and a voltage is applied, the current and energy needed to orient the magnetic domains drop by 4%. However, this slight reduction was not significant enough to be applied in devices.

A research team directed by Jordi Sort, ICREA researcher and lecturer in the Department of Physics at the Universitat Autònoma de Barcelona, with the collaboration of the Catalan Institute for Nanoscience and Nanotechnology (ICN2), has searched for a solution based on the magnetic properties of a new nanoporous material with a greatly increased surface area. The new material, featured this week in the journal Advanced Functional Materials, consists of nanoporous copper-nickel alloy films whose interior forms a network of surfaces and voids similar to the inside of a sponge, but with a separation between pores of only 5 to 10 nanometres. In other words, the walls of the pores are only a few dozen atoms thick.

“There are many researchers applying nanoporous materials to improve physical-chemical processes, such as in the development of new sensors, but we studied what these materials could provide to electromagnetism,” Jordi Sort explains. “The nanopores found on the inside of nanoporous materials offer a great amount of surface. With this vast surface concentrated in a very small space we can apply the voltage of a battery and enormously reduce the energy needed to orientate the magnetic domains and record data. This represents a new paradigm in the energy saving of computers and in computing and handling magnetic data in general,” says Jordi Sort.

UAB researchers have built the first prototypes of nanoporous magnetic memories based on copper-nickel alloys (CuNi) and have reached very satisfactory results, with a 35% reduction in magnetic coercivity, a quantity related to the energy needed to reorient the magnetic domains and record data.

In these first prototypes, the researchers applied the voltage using liquid electrolytes, but they are now working on solid materials that could help bring the devices to market. According to Jordi Sort, "Implementing this material into the memories of computers and mobile devices can offer many advantages, mainly in direct energy saving for computers and a considerable increase in the autonomy of mobile devices."

The development of new nanoelectronic devices with improved energy efficiency is one of the strategic lines of the European Union's Horizon 2020 programme. According to some estimates, if electric current were completely replaced by voltage in data processing systems, energy costs could be cut by a factor of 500. In fact, the server farms of large companies such as Google and Facebook are located underwater or in Nordic countries, where temperatures are very low, with the aim of reducing heating and energy consumption.

Living computers: RNA circuits transform cells into nanodevices

The interdisciplinary nexus of biology and engineering, known as synthetic biology, is growing at a rapid pace, opening new vistas that could scarcely be imagined a short time ago.

In new research, Alex Green, a professor at ASU’s Biodesign Institute, demonstrates how living cells can be induced to carry out computations in the manner of tiny robots or computers.

The results of the new study have significant implications for intelligent drug design and smart drug delivery, green energy production, low-cost diagnostic technologies and even the development of futuristic nanomachines capable of hunting down cancer cells or switching off aberrant genes.

“We’re using very predictable and programmable RNA-RNA interactions to define what these circuits can do,” says Green. “That means we can use computer software to design RNA sequences that behave the way we want them to in a cell. It makes the design process a lot faster.”

The study appears in the advance online edition of the journal Nature.

Designer RNA

The approach described uses circuits composed of ribonucleic acid, or RNA. These circuit designs, which resemble conventional electronic circuits, self-assemble in bacterial cells, allowing them to sense incoming messages and respond to them by producing a particular computational output (in this case, a protein).

In the new study, specialized circuits known as logic gates were designed in the lab, then incorporated into living cells. The tiny circuit switches are tripped when messages (in the form of RNA fragments) attach themselves to their complementary RNA sequences in the cellular circuit, activating the logic gate and producing a desired output.

The RNA switches can be combined in various ways to produce more complex logic gates capable of evaluating and responding to multiple inputs, just as a simple computer may take several variables and perform sequential operations like addition and subtraction in order to reach a final result.

The new study dramatically improves the ease with which cellular computing may be carried out. The RNA-only approach to producing cellular nanodevices is a significant advance, as earlier efforts required the use of complex intermediaries, such as proteins. Now, the necessary ribocomputing parts can be readily designed on a computer. The simple base-pairing properties of RNA's four nucleotide letters (A, C, G and U) ensure the predictable self-assembly and functioning of these parts within a living cell.

Green’s work in this area began at the Wyss Institute at Harvard, where he helped develop the central component used in the cellular circuits, known as an RNA toehold switch. The work was carried out while Green was a post-doc working with nanotechnology expert Peng Yin, along with the synthetic biologists James Collins and Pamela Silver, who are all co-authors on the new paper. “The first experiments were in 2012,” Green says. “Basically, the toehold switches performed so well that we wanted to find a way to best exploit them for cellular applications.”

After arriving at ASU, Green's first graduate student, Duo Ma, worked on experiments at the Biodesign Institute, while a postdoc, Jongmin Kim, continued similar work at the Wyss Institute. Both are also co-authors of the new study.

Nature’s Pentium chip

The possibility of using DNA and RNA, the molecules of life, to perform computer-like computations was first demonstrated in 1994 by Leonard Adleman of the University of Southern California. Since then, rapid progress has advanced the field considerably, and recently, such molecular computing has been accomplished within living cells. (Bacterial cells are usually employed for this purpose as they are simpler and easier to manipulate.)

The technique described in the new paper takes advantage of the fact that RNA, unlike DNA, is single stranded when it is produced in cells. This allows researchers to design RNA circuits that can be activated when a complementary RNA strand binds with an exposed RNA sequence in the designed circuit. This binding of complementary strands is regular and predictable, with A nucleotides always pairing with U and C always pairing with G.
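That pairing rule is simple enough to state in code. The sketch below computes the strand that would bind a given sequence; the example sequence is arbitrary, chosen only for illustration:

```python
# The base-pairing rule the circuits rely on: A pairs with U and
# C pairs with G, so the strand that binds a given RNA sequence is
# its reverse complement (the strands bind antiparallel).

PAIR = {"A": "U", "U": "A", "C": "G", "G": "C"}

def reverse_complement(rna):
    """Return the strand that would bind `rna`, read 5'->3'."""
    return "".join(PAIR[base] for base in reversed(rna))

trigger = "AUGC"                       # arbitrary example sequence
binder = reverse_complement(trigger)   # the sequence that binds it
```

Because the rule is deterministic, applying it twice returns the original sequence, which is what makes the binding behavior predictable enough to design circuits around.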

With all the processing elements of the circuit made using RNA, which can take on an astronomical number of potential sequences, the real power of the newly described method lies in its ability to perform many operations at the same time. This capacity for parallel processing permits faster and more sophisticated computation while making efficient use of the limited resources of the cell.

Logical results

In the new study, logic gates known as AND, OR and NOT were designed. An AND gate produces an output in the cell only when two RNA messages A AND B are present. An OR gate responds to either A OR B, while a NOT gate will block output if a given RNA input is present. Combining these gates can produce complex logic capable of responding to multiple inputs.

Using RNA toehold switches, the researchers produced the first ribocomputing devices capable of four-input AND, six-input OR and a 12-input device able to carry out a complex combination of AND, OR and NOT logic known as a disjunctive normal form (DNF) expression. When the logic gate encounters the correct RNA binding sequences leading to activation, a toehold switch opens and translation to protein takes place. All of these circuit-sensing and output functions can be integrated in the same molecule, making the systems compact and easier to implement in a cell.
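In plain boolean terms, such a circuit is an OR of AND clauses, with optional NOTs. The Python below is a schematic model of that logic, not the molecular implementation, and the trigger-RNA names are hypothetical:

```python
# Schematic model of a DNF ribocomputing circuit: the cell "fires"
# (translates the output protein) only when the boolean expression
# over trigger-RNA presence is satisfied.

def dnf_circuit(inputs):
    """Disjunctive normal form: an OR of AND clauses.

    `inputs` maps trigger-RNA names to True (present) / False (absent).
    This example fires if (a1 AND a2 AND NOT a3) OR (b1 AND b2).
    """
    clause1 = inputs["a1"] and inputs["a2"] and not inputs["a3"]
    clause2 = inputs["b1"] and inputs["b2"]
    return clause1 or clause2

fires = dnf_circuit({"a1": True, "a2": True, "a3": False,
                     "b1": False, "b2": True})   # True: first clause holds
```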

The research represents the next phase of ongoing work using the highly versatile RNA toehold switches. In earlier work, Green and his colleagues demonstrated that an inexpensive, paper-based array of RNA toehold switches could act as a highly accurate platform for diagnosing the Zika virus. Detection of viral RNA by the array activated the toehold switches, triggering production of a protein, which registered as a color change on the array.

The basic principle of using RNA-based devices to regulate protein production can be applied to virtually any RNA input, ushering in a new generation of accurate, low-cost diagnostics for a broad range of diseases. The cell-free approach is particularly well suited for emerging threats and during disease outbreaks in the developing world, where medical resources and personnel may be limited.

The computer within

According to Green, the next stage of research will focus on the use of the RNA toehold technology to produce so-called neural networks within living cells — circuits capable of analyzing a range of excitatory and inhibitory inputs, averaging them and producing an output once a particular threshold of activity is reached, much the way a neuron averages incoming signals from other neurons. Ultimately, researchers hope to induce cells to communicate with one another via programmable molecular signals, forming a truly interactive, brain-like network.

“Because we’re using RNA, a universal molecule of life, we know these interactions can also work in other cells, so our method provides a general strategy that could be ported to other organisms,” Green says, alluding to a future in which human cells become fully programmable entities with extensive biological capabilities.
