Scientists connect quantum bits with sound over record distances – Phys.org

Scientists connect quantum bits with sound over record distances
Researchers work on superconducting quantum technology at the Institute for Molecular Engineering. Credit: Nancy Wong

Scientists with the Institute for Molecular Engineering at the University of Chicago have made two breakthroughs in the quest to develop quantum technology. In one study, they entangled two quantum bits using sound for the first time; in another, they built the highest-quality long-range link between two qubits to date. The work brings us closer to harnessing quantum technology to make more powerful computers, ultra-sensitive sensors and secure transmissions.

“Both of these are transformative steps forward to quantum communications,” said co-author Andrew Cleland, the John A. MacLean Sr. Professor of Molecular Engineering at the IME and UChicago-affiliated Argonne National Laboratory. A leader in the development of superconducting quantum technology, he led the team that built the first “quantum machine,” demonstrating quantum performance in a mechanical resonator. “One of these experiments shows the precision and accuracy we can now achieve, and the other demonstrates a fundamental new ability for these qubits.”

Scientists and engineers see enormous potential in quantum technology, a field that uses the strange properties of the tiniest particles in nature to manipulate and transmit information. For example, under certain conditions, two particles can be “entangled”—their fates linked even when they’re not physically connected. Entangling particles allows you to do all kinds of cool things, like teleport quantum information between distant locations or make effectively unhackable networks.

But the technology has a long way to go—literally: A huge challenge is sending quantum information any substantial distance along cables or fibers.

In a study published April 22 in Nature Physics, Cleland’s lab built a system of superconducting qubits that exchanged quantum information along a track nearly a meter long with extremely high fidelity—far higher performance than had previously been demonstrated.

“The coupling was so strong that we can demonstrate a quantum phenomenon called ‘quantum ping-pong’—sending and then catching individual photons as they bounce back,” said Youpeng Zhong, a graduate student in Cleland’s group and the first author of the paper.

Postdoctoral researcher Audrey Bienfait (left) and graduate student Youpeng Zhong work in the laboratory of Prof. Andrew Cleland in UChicago’s Institute for Molecular Engineering. Credit: Nancy Wong

One of the scientists’ breakthroughs was building the right device to send the signal. The key was shaping the pulses correctly—in an arc shape, like opening and closing a valve slowly, at just the right rate. This method of ‘throttling’ the quantum information helped them achieve such clarity that the system could pass a gold standard measurement of quantum entanglement, called a Bell test. This is a first for superconducting qubits, and it could be useful for building quantum computers as well as for quantum communications.
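The “valve” idea above can be sketched numerically. The envelope below is a simple raised-cosine ramp, chosen purely as an illustration of a pulse that opens and closes smoothly rather than abruptly; the actual pulse shapes, durations, and calibration in the paper are more sophisticated, so everything here is an assumption.

```python
import numpy as np

def raised_cosine_envelope(t, duration):
    """Illustrative pulse that ramps up and down smoothly, like slowly
    opening and then closing a valve. Not the authors' actual pulse."""
    envelope = 0.5 * (1 - np.cos(2 * np.pi * t / duration))
    return np.where((t >= 0) & (t <= duration), envelope, 0.0)

t = np.linspace(-0.5, 1.5, 2001)   # time in units of the pulse duration
pulse = raised_cosine_envelope(t, 1.0)

# The envelope starts and ends at zero and peaks in the middle, so the
# photon leaks onto the track gradually instead of in an abrupt step.
print(round(pulse.max(), 3))
print(pulse[0], pulse[-1])
```

The point of a smooth shape like this is that an abrupt turn-on would scatter the photon into modes the receiver can’t catch; a gradual ramp lets the receiving qubit absorb it cleanly.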

The other study, published April 26 in Science, shows a way to entangle two superconducting qubits using sound.

A challenge for scientists and engineers as they advance quantum technology is to be able to translate quantum signals from one medium to another. For example, microwave light is perfect for carrying quantum signals around inside chips. “But you can’t send quantum information through the air in microwaves; the signal just gets swamped,” Cleland said.

The team built a system that could translate the qubits’ microwave language into acoustic sound and have it travel across the chip—using a receiver at the other end that could do the reverse translation.


It required some creative engineering: “Microwaves and acoustics are not friends, so we had to separate them onto two chips and stack those on top of each other,” said Audrey Bienfait, a postdoctoral researcher and first author on the study. “But now that we’ve shown it is possible, it opens some interesting new possibilities for quantum sensors.”



More information:
Y. P. Zhong et al. Violating Bell’s inequality with remotely connected superconducting qubits, Nature Physics (2019). DOI: 10.1038/s41567-019-0507-7

A. Bienfait et al. Phonon-mediated quantum state transfer and remote qubit entanglement, Science (2019). DOI: 10.1126/science.aaw8415

Citation:
Scientists connect quantum bits with sound over record distances (2019, May 1)
retrieved 1 May 2019
from https://phys.org/news/2019-05-scientists-quantum-bits-distances.html


Scientists pull speech directly from the brain – TechCrunch

In a feat that could eventually unlock the possibility of speech for people with severe medical conditions, scientists have successfully recreated the speech of healthy subjects by tapping directly into their brains. The technology is a long, long way from practical application but the science is real and the promise is there.

Edward Chang, neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, explained the impact of the team’s work in a press release: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

To be perfectly clear, this isn’t some magic machine that you sit in and it translates your thoughts into speech. It’s a complex and invasive process that decodes not exactly what the subject is thinking, but what they were actually speaking.

Led by speech scientist Gopala Anumanchipalli, the experiment involved subjects who had already had large electrode arrays implanted in their brains for a different medical procedure. The researchers had these lucky people read out several hundred sentences aloud while closely recording the signals detected by the electrodes.

The electrode array in question

See, it happens that the researchers know a certain pattern of brain activity that comes after you think of and arrange words (in cortical areas like Wernicke’s and Broca’s) and before the final signals are sent from the motor cortex to your tongue and mouth muscles. There’s a sort of intermediate signal between those that Anumanchipalli and his co-author, grad student Josh Chartier, previously characterized, and which they thought may work for the purposes of reconstructing speech.

Analyzing the audio directly let the team determine which muscles and movements would be involved when (this is pretty established science), and from this they built a sort of virtual model of the person’s vocal system.

They then mapped the brain activity detected during the session to that virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It’s important to understand that this isn’t turning abstract thoughts into words — it’s understanding the brain’s concrete instructions to the muscles of the face, and determining from those which words those movements would be forming. It’s brain reading, but it isn’t mind reading.
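The two-stage mapping described above—brain activity to articulator movements, then articulator movements to acoustics—can be sketched with toy linear models. The paper used recurrent neural networks on real neural recordings; the synthetic data, dimensions, and plain least-squares fits below are illustrative assumptions, not the authors’ method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_brain, n_artic, n_acoustic = 500, 64, 12, 32

# Synthetic "ground truth": brain activity drives articulator movements,
# and articulator movements drive acoustic features.
brain = rng.normal(size=(n_samples, n_brain))
W1_true = rng.normal(size=(n_brain, n_artic))
artic = brain @ W1_true + 0.01 * rng.normal(size=(n_samples, n_artic))
W2_true = rng.normal(size=(n_artic, n_acoustic))
acoustic = artic @ W2_true + 0.01 * rng.normal(size=(n_samples, n_acoustic))

# Stage 1: learn the brain -> articulatory map (least-squares regression).
W1, *_ = np.linalg.lstsq(brain, artic, rcond=None)
# Stage 2: learn the articulatory -> acoustic map.
W2, *_ = np.linalg.lstsq(artic, acoustic, rcond=None)

# Decode: brain activity -> predicted articulation -> predicted acoustics.
pred_acoustic = (brain @ W1) @ W2

# How well do decoded acoustics track the true acoustics?
corr = np.corrcoef(pred_acoustic.ravel(), acoustic.ravel())[0, 1]
print(corr > 0.9)
```

The key design point the sketch preserves is the intermediate articulatory stage: decoding motor commands to a virtual vocal tract, rather than mapping brain signals straight to sound, is what makes this “brain reading” rather than mind reading.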

The resulting synthetic speech, while not exactly crystal clear, is certainly intelligible. And set up correctly, it could be capable of outputting 150 words per minute from a person who may otherwise be incapable of speech.

“We still have a ways to go to perfectly mimic spoken language,” said Chartier. “Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

For comparison, a person so afflicted, for instance with a degenerative muscular disease, often has to speak by spelling out words one letter at a time with their gaze. Picture 5-10 words per minute, with other methods for more disabled individuals going even slower. It’s a miracle in a way that they can communicate at all, but this time-consuming and less than natural method is a far cry from the speed and expressiveness of real speech.

If a person were able to use this method, they would be far closer to ordinary speech, though perhaps at the cost of perfect accuracy. But it’s not a magic bullet.

The problem with this method is that it requires a great deal of carefully collected data from what amounts to a healthy speech system, from brain to tip of the tongue. For many people it’s no longer possible to collect this data, and for others the invasive method of collection will make it impossible for a doctor to recommend. And conditions that have prevented a person from ever talking prevent this method from working as well.

The good news is that it’s a start, and there are plenty of conditions it would work for, theoretically. And collecting that critical brain and speech recording data could be done preemptively in cases where a stroke or degeneration is considered a risk.
