How to Make Things Float With Ultrasound

Magicians have long made things appear to hover without any visible means of support. For some reason, engineers delight in trying to turn this particular illusion into reality, and we’re no exception at IEEE Spectrum. Back in 2014, for example, W. Wayt Gibbs wrote for us about how to make a miniature disco ball levitate using the power of electromagnetism. But that system works only for objects that can have a magnet attached. So, when I saw a kit promising to make any kind of small object float, even drops of liquid, I knew I had to have it.

The US $70 kit is from Makerfabs and is based on the TinyLev design created by Asier Marzo, Adrian Barnes, and Bruce W. Drinkwater, as published in the August 2017 issue of Review of Scientific Instruments. (Their goal was to create an inexpensive way to examine materials using techniques like spectroscopy without worrying about contamination from a container. My goal is to be able to make something float while cackling, “Behold!”)

The basic operating principle is to set up an acoustic standing wave. Just as with a vibrating guitar string, such a standing wave will have nodes, which are spatially fixed points where the sound’s vibrations are at a minimum. A small object placed at one of the nodes will be held in position, thanks to the momentum imparted by the vibrating air molecules surrounding the node.
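
A quick back-of-the-envelope calculation shows the scale involved. Assuming sound travels at roughly 343 meters per second in room-temperature air, a standing wave at the transducers’ 40-kilohertz operating frequency spaces its nodes about 4.3 millimeters apart:

```python
# Node geometry for an ultrasonic standing wave, assuming ~343 m/s
# for the speed of sound in room-temperature air.
SPEED_OF_SOUND = 343.0   # m/s (assumed)
FREQUENCY = 40_000.0     # Hz, the transducers' operating frequency

wavelength = SPEED_OF_SOUND / FREQUENCY   # roughly 8.6 mm
node_spacing = wavelength / 2             # adjacent nodes sit half a wavelength apart

print(f"wavelength:   {wavelength * 1000:.1f} mm")
print(f"node spacing: {node_spacing * 1000:.1f} mm")
```

That half-wavelength spacing also hints at why only small objects can be levitated: anything much bigger than a few millimeters won’t fit within a single node.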

The standing wave is formed by two opposing concave arrays of ultrasonic transducers, with 36 transducers per array. Sound waves emanating from the arrays interfere with one another to produce the standing wave.

Technically, you can create a standing wave using just two transducers pointed directly at each other, but using the arrays has two advantages. First, they deliver a lot more power, allowing heavier objects to be held aloft and permitting a wider separation between the opposing arrays. And second, the arrays’ curved shape focuses power toward the central axis of the levitator, providing lateral forces that keep levitating objects from drifting sideways out of the node.

Strength in Numbers: I mounted 72 ultrasonic transducers in a 3D-printed frame (top) and then wired them so that they are controlled in sync (middle). The transducers are connected to a motor driver board that is controlled by an Arduino Nano (bottom). Photos: Stephen Cass

The TinyLev can lift light objects with a maximum length of about 4 millimeters. The creators have even demonstrated the system with an ant. Their published paper has everything you need if you want to build the TinyLev completely from scratch, including files for 3D-printing the frame that holds the arrays, but you can save yourself a lot of trouble by buying the Makerfabs kit. In particular, the frame needs to be printed at a fairly high resolution; otherwise, getting the transducers to fit into their shallow sockets may require a lot of finishing work. In addition to the transducers and the frame, the other major components are an Arduino Nano and a motor-driver breakout board. The motor driver provides the current to power the arrays. The Nano controls the driver by providing 40-kilohertz square-wave timing signals. The Nano can also be used to alter the phase difference between the two arrays, which lets you tweak the vertical position of floating samples.
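
The phase adjustment works because shifting one array’s drive signal slides the entire standing-wave pattern along the levitator’s axis: a full 360-degree shift walks the nodes through half a wavelength. Here’s a minimal sketch of that relationship (just the arithmetic, not the kit’s firmware):

```python
SPEED_OF_SOUND = 343.0                    # m/s, assumed for room-temperature air
WAVELENGTH = SPEED_OF_SOUND / 40_000.0    # meters, for the 40 kHz drive signal

def node_shift_mm(phase_offset_deg: float) -> float:
    """Vertical node displacement for a given phase offset between the arrays.

    A full cycle (360 degrees) of offset moves the nodes half a
    wavelength, and the displacement scales linearly in between.
    """
    return (phase_offset_deg / 360.0) * (WAVELENGTH / 2) * 1000  # millimeters

for deg in (45, 90, 180, 360):
    print(f"{deg:3d} degrees -> nodes move {node_shift_mm(deg):.2f} mm")
```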

Makerfabs’ online instructions are taken from the original scientific paper’s supplementary material and include some video. Ideally, I’d like to see Makerfabs reformulate the instructions for its website. As it is, I completely missed wiring one jumper because it was cropped out by the border of one diagram. It wasn’t until later, when I noticed the diagram could be enlarged, that I spotted the missing connection.

But the biggest mistake I made was all my own. Each transducer in an array has to emit a sound wave that is in phase with the rest. The instructions offer two different ways to determine the internal polarity of the transducers, so that they generate waves that are correctly phased when the driving signal is fed to the array. Looking at the transducers, I noted each one had a little “+” symbol by one leg and thus assumed that the transducers used in the instructions must be ones that didn’t have their polarity marked in this way. So, I wired in the transducers according to this symbol and hooked up the control circuitry, a straightforward if somewhat lengthy process that also entails downloading and installing control software onto the Nano.

But when I turned it on, I realized that no power of levitation was present. Following the troubleshooting section of the instructions, I wired up two extra transducers to an oscilloscope. Using one transducer as a reference and the other as a probe, I measured the phase of the transducers in each array (just as a small speaker can be used as a microphone in a pinch, the transducers can convert ultrasound into a voltage). About a third of my transducers were out of phase, so I had to desolder, flip, and solder them back in.
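
If you capture the reference and probe waveforms as sampled data, that polarity check can also be scripted. The sketch below uses synthetic signals as stand-ins for real scope captures and estimates the relative phase with a single-frequency DFT in NumPy; a result near 180 degrees flags a reversed transducer:

```python
import numpy as np

SAMPLE_RATE = 1_000_000   # samples/s; a plausible scope capture rate (assumed)
DRIVE_FREQ = 40_000.0     # Hz
t = np.arange(1000) / SAMPLE_RATE

# Synthetic stand-ins for the two captured transducer voltages.
reference = np.sin(2 * np.pi * DRIVE_FREQ * t)
probe = np.sin(2 * np.pi * DRIVE_FREQ * t + np.pi)   # a flipped transducer

# Project each capture onto the drive frequency (a single-bin DFT),
# then compare the angles of the two complex amplitudes.
ref_bin = np.sum(reference * np.exp(-2j * np.pi * DRIVE_FREQ * t))
probe_bin = np.sum(probe * np.exp(-2j * np.pi * DRIVE_FREQ * t))
phase_deg = np.degrees(np.angle(probe_bin / ref_bin))

print(f"relative phase: {phase_deg:.0f} degrees")   # ~180 here: flip that one
```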

Use the Force: Momentum imparted by interfering ultrasonic waves can hold small objects aloft against the force of gravity, such as this drop of colored water. Photo: Randi Klett

Once that was done, I tested the array again and, with a bit of trial and error, managed to successfully float a small chip of wood. I built a wooden case to support the TinyLev’s frame and hold the control electronics.

I then tried to levitate everything in reach that was about the right size and weight. Finding the invisible nodes proved tricky. I got my best results from using clumps of sawdust loosely held with tweezers, as it’s easy to see the sawdust twitch when it’s being vibrated. Tiny amounts of sawdust can also help to mark the locations of the nodes when you are trying to levitate something more substantial.

It’s really pretty impressive to see multiple objects levitating at once, each in its own node. And once you do get something to float, the system can be surprisingly stable: I have had objects levitating for several hours, which allows me to indeed cackle, “Behold!” whenever someone comes into my office, without any embarrassing utterances of “Wait, wait, just let me try it one more time....”

This article appears in the May 2018 print issue as “Acoustic Levitation.”

Conjurer of Compression

Lossless data compression seems a bit like a magic trick. Its cousin, lossy compression, is easier to comprehend. Lossy algorithms are used to get music into the popular MP3 format and turn a digital image into a standard JPEG file. They do this by selectively removing bits, taking what scientists know about the way we see and hear to determine which bits we'd least miss. But no one can make the case that the resulting file is a perfect replica of the original.

Not so with lossless data compression. Bits do disappear, making the data file dramatically smaller and thus easier to store and transmit. The important difference is that the bits reappear on command. It's as if the bits are rabbits in a magician's act, disappearing and then reappearing from inside a hat at the wave of a wand.

The world of magic had Houdini, who pioneered tricks that are still performed today. And data compression has Jacob Ziv.

In 1977, Ziv, working with Abraham Lempel, published the equivalent of Houdini on Magic: a paper in the IEEE Transactions on Information Theory titled “A Universal Algorithm for Sequential Data Compression.” The algorithm described in the paper came to be called LZ77—from the authors' names, in alphabetical order, and the year. LZ77 wasn't the first lossless compression algorithm, but it was the first that could work its magic in a single step.

Jacob Ziv

Current job: Technion Distinguished Professor Emeritus, Faculty of Electrical Engineering

Birthplace: Tiberias, British-ruled Palestine (now Israel)

Family: Married to Shoshana, four children, nine grandchildren

Education: BSc, Dip-Eng, and MSc, all in electrical engineering from Technion, in 1954, 1955, and 1957; Sc.D., MIT, 1962

Favorite books: Detective stories, particularly those featuring Perry Mason

Favorite kind of music: Classical, particularly Bach; jazz

Favorite food: Falafel, ice cream

How he starts the day: A cup of espresso and a piece of dark chocolate

Organizational memberships: Israel Academy of Science and Humanities, U.S. National Academy of Engineering, U.S. National Academy of Sciences, American Philosophical Society, IEEE Fellow

Major awards: IEEE Medal of Honor “for fundamental contributions to information theory and data compression technology, and for distinguished research leadership”; BBVA Foundation Frontiers of Knowledge Award; Claude E. Shannon Award of the IEEE Information Theory Society

The following year, the two researchers issued a refinement, LZ78. That algorithm became the basis for the Unix compress program used in the early '80s; WinZip and Gzip, born in the early '90s; and the GIF and TIFF image formats. Without these algorithms, we'd likely be mailing large data files on discs instead of sending them across the Internet with a click, buying our music on CDs instead of streaming it, and looking at Facebook feeds that don't have bouncing animated images.

Ziv went on to partner with other researchers on other innovations in compression. It is his full body of work, spanning more than half a century, that earned him the 2021 IEEE Medal of Honor “for fundamental contributions to information theory and data compression technology, and for distinguished research leadership.”

Ziv was born in 1931 to Russian immigrants in Tiberias, a city then in British-ruled Palestine and now part of Israel. Electricity and gadgets—and little else—fascinated him as a child. While practicing violin, for example, he came up with a scheme to turn his music stand into a lamp. He also tried to build a Marconi transmitter from metal player-piano parts. When he plugged the contraption in, the entire house went dark. He never did get that transmitter to work.

When the Arab-Israeli War began in 1948, Ziv was in high school. Drafted into the Israel Defense Forces, he served briefly on the front lines until a group of mothers held organized protests, demanding that the youngest soldiers be sent elsewhere. Ziv's reassignment took him to the Israeli Air Force, where he trained as a radar technician. When the war ended, he entered Technion—Israel Institute of Technology to study electrical engineering.

After completing his master's degree in 1955, Ziv returned to the defense world, this time joining Israel's National Defense Research Laboratory (now Rafael Advanced Defense Systems) to develop electronic components for use in missiles and other military systems. The trouble was, Ziv recalls, that none of the engineers in the group, including himself, had more than a basic understanding of electronics. Their electrical engineering education had focused more on power systems.

“We had about six people, and we had to teach ourselves,” he says. “We would pick a book and then study together, like religious Jews studying the Hebrew Bible. It wasn't enough.”

The group's goal was to build a telemetry system using transistors instead of vacuum tubes. They needed not only knowledge, but parts. Ziv contacted Bell Telephone Laboratories and requested a free sample of its transistor; the company sent 100.

“That covered our needs for a few months,” he says. “I give myself credit for being the first one in Israel to do something serious with the transistor.”

In 1959, Ziv was selected as one of a handful of researchers from Israel's defense lab to study abroad. That program, he says, transformed the evolution of science in Israel. Its organizers didn't steer the selected young engineers and scientists into particular fields. Instead, they let them pursue any type of graduate studies in any Western nation.

Ziv planned to continue working in communications, but he was no longer interested in just the hardware. He had recently read Information Theory (Prentice-Hall, 1953), one of the earliest books on the subject, by Stanford Goldman, and he decided to make information theory his focus. And where else would one study information theory but MIT, where Claude Shannon, the field's pioneer, had started out?

Ziv arrived in Cambridge, Mass., in 1960. His Ph.D. research involved a method of determining how to encode and decode messages sent through a noisy channel, minimizing the probability of error while at the same time keeping the decoding simple.

“Information theory is beautiful,” he says. “It tells you what is the best that you can ever achieve, and [it] tells you how to approximate the outcome. So if you invest the computational effort, you can know you are approaching the best outcome possible.”

Ziv contrasts that certainty with the uncertainty of a deep-learning algorithm. It may be clear that the algorithm is working, but nobody really knows whether it is the best result possible.

While at MIT, Ziv held a part-time job at U.S. defense contractor Melpar, where he worked on error-correcting software. He found this work less beautiful. “In order to run a computer program at the time, you had to use punch cards,” he recalls. “And I hated them. That is why I didn't go into real computer science.”

Back at the Defense Research Laboratory after two years in the United States, Ziv took charge of the Communications Department. Then in 1970, with several other coworkers, he joined the faculty of Technion.

Jacob Ziv (with glasses), who became chair of Technion's electrical engineering department in the 1970s, worked earlier on information theory with Moshe Zakai. The two collaborated on a paper describing what became known as the Ziv-Zakai bound. Photo: Jacob Ziv/Technion

There he met Abraham Lempel. The two discussed trying to improve lossless data compression.

The state of the art in lossless data compression at the time was Huffman coding. This approach starts by finding sequences of bits in a data file and then sorting them by the frequency with which they appear. Then the encoder builds a dictionary in which the most common sequences are represented by the smallest number of bits. This is the same idea behind Morse code: The most frequent letter in the English language, e, is represented by a single dot, while rarer letters have more complex combinations of dots and dashes.

Huffman coding, while still used today in the MPEG-2 compression format and a lossless form of JPEG, has its drawbacks. It requires two passes through a data file: one to calculate the statistical features of the file, and the second to encode the data. And storing the dictionary along with the encoded data adds to the size of the compressed file.
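
A toy version of Huffman's method fits in a few lines of Python and makes those two passes explicit: one to count frequencies, one to encode. (The input string is just an illustrative stand-in, and a real encoder would also have to ship the code table.)

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict[int, str]:
    """Build a Huffman code table. Pass 1: count symbol frequencies.

    Heap entries are (frequency, tiebreak, {symbol: codeword}) tuples;
    merging the two rarest subtrees prepends one bit to their codewords.
    """
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

text = b"abracadabra abracadabra"        # illustrative stand-in
code = huffman_code(text)
bits = "".join(code[b] for b in text)    # pass 2: encode with the table
print(f"{8 * len(text)} raw bits -> {len(bits)} encoded bits")
```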

Ziv and Lempel wondered if they could develop a lossless data-compression algorithm that would work on any kind of data, did not require preprocessing, and would achieve the best compression for that data, a target defined by something known as the Shannon entropy. It was unclear if their goal was even possible. They decided to find out.
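
That target is easy to state in code. For a source with no memory, the Shannon entropy is the floor, in bits per symbol, below which no lossless code can go; what Ziv and Lempel wanted was an algorithm that approaches the corresponding limit for any source without being told its statistics. A sketch of the memoryless case:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(data: bytes) -> float:
    """Shannon entropy H = -sum(p * log2(p)): the lower bound, in bits
    per symbol, on any lossless code for a memoryless source."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

text = b"abracadabra abracadabra"   # illustrative stand-in
h = entropy_bits_per_symbol(text)
print(f"{h:.2f} bits/symbol -> at best about {h * len(text):.0f} bits in total")
```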

Ziv says he and Lempel were the “perfect match” to tackle this question. “I knew all about information theory and statistics, and Abraham was well equipped in Boolean algebra and computer science.”

The two came up with the idea of having the algorithm look for unique sequences of bits at the same time that it's compressing the data, using pointers to refer to previously seen sequences. This approach requires only one pass through the file, so it's faster than Huffman coding.

Ziv explains it this way: “You look at incoming bits to find the longest stretch of bits for which there is a match in the past. Let's say the first incoming bit is a 1. Now, since you have only one bit, you have never seen it in the past, so you have no choice but to transmit it as is.”

“But then you get another bit,” he continues. “Say that's a 1 as well. So you enter into your dictionary 1-1. Say the next bit is a 0. So in your dictionary you now have 1-1 and also 1-0.”

Here's where the pointer comes in. The next time that the stream of bits includes a 1-1 or a 1-0, the software doesn't transmit those bits. Instead it sends a pointer to the location where that sequence first appeared, along with the length of the matched sequence. The number of bits that you need for that pointer is very small.

“It's basically what they used to do in publishing TV Guide,” Ziv says. “They would run a synopsis of each program once. If the program appeared more than once, they didn't republish the synopsis. They just said, go back to page x.”

Decoding in this way is even simpler, because the decoder doesn't have to identify unique sequences. Instead it finds the locations of the sequences by following the pointers and then replaces each pointer with a copy of the relevant sequence.
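
Here is that idea rendered as a toy Python encoder and decoder. It is a sketch of the LZ77 concept, not the paper's formal construction: the encoder makes a single pass, emitting a literal byte when it must and a (distance, length) pointer into the already-seen window when it can.

```python
def lz77_compress(data: bytes, window: int = 4096, min_match: int = 3):
    """Greedy toy LZ77 encoder: a single pass, no precomputed statistics.

    Output tokens are ('lit', byte) for literals and ('ptr', distance,
    length) for matches found in the sliding window of recent bytes."""
    i, tokens = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):   # scan window for longest match
            length = 0
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        if best_len >= min_match:
            tokens.append(("ptr", best_dist, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lz77_decompress(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, dist, length = tok
            for _ in range(length):   # byte-by-byte copy handles overlapping matches
                out.append(out[-dist])
    return bytes(out)

sample = b"to be or not to be, that is the question: to be or not to be"
tokens = lz77_compress(sample)
assert lz77_decompress(tokens) == sample
print(f"{len(sample)} bytes -> {len(tokens)} tokens")
```

The decoder's byte-by-byte copy is deliberate: it allows a match to overlap the bytes it is still producing, so a long run of a single character can decode from one short pointer.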

The algorithm did everything Ziv and Lempel had set out to do—it proved that universally optimum lossless compression without preprocessing was possible.

“At the time they published their work, the fact that the algorithm was crisp and elegant and was easily implementable with low computational complexity was almost beside the point,” says Tsachy Weissman, an electrical engineering professor at Stanford University who specializes in information theory. “It was more about the theoretical result.”

Eventually, though, researchers recognized the algorithm's practical implications, Weissman says. “The algorithm itself became really useful when our technologies started dealing with larger file sizes beyond 100,000 or even a million characters.”

“Their story is a story about the power of fundamental theoretical research,” Weissman adds. “You can establish theoretical results about what should be achievable—and decades later humanity benefits from the implementation of algorithms based on those results.”

Ziv and Lempel kept working on the technology, trying to get closer to entropy for small data files. That work led to LZ78. Ziv says LZ78 seems similar to LZ77 but is actually very different, because it anticipates the next bit. “Let's say the first bit is a 1, so you enter in the dictionary two codes, 1-1 and 1-0,” he explains. “You can imagine these two sequences as the first branches of a tree.”

“When the second bit comes,” Ziv says, “if it's a 1, you send the pointer to the first code, the 1-1, and if it's 0, you point to the other code, 1-0. And then you extend the dictionary by adding two more possibilities to the selected branch of the tree. As you do that repeatedly, sequences that appear more frequently will grow longer branches.”

“It turns out,” he says, “that not only was that the optimal [approach], but so simple that it became useful right away.”
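
In code, the LZ78 dictionary is exactly that growing tree of phrases. Here is another toy sketch, a simplification of the published algorithm: each output token names a previously parsed phrase plus the one new symbol that extends it.

```python
def lz78_compress(data: bytes):
    """Toy LZ78 parser. Each token is (index, symbol): index points to a
    previously parsed phrase (0 is the empty phrase) and symbol is the
    one new byte that extends it, growing the dictionary tree."""
    dictionary = {b"": 0}
    tokens, phrase = [], b""
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate                       # keep walking down the tree
        else:
            tokens.append((dictionary[phrase], byte))
            dictionary[candidate] = len(dictionary)  # add a new branch
            phrase = b""
    if phrase:
        tokens.append((dictionary[phrase], None))    # flush a trailing phrase
    return tokens

def lz78_decompress(tokens) -> bytes:
    phrases, out = [b""], bytearray()
    for index, byte in tokens:
        phrase = phrases[index] + (bytes([byte]) if byte is not None else b"")
        out += phrase
        phrases.append(phrase)
    return bytes(out)

sample = b"she sells sea shells by the sea shore"
tokens = lz78_compress(sample)
assert lz78_decompress(tokens) == sample
print(f"{len(sample)} bytes -> {len(tokens)} (index, symbol) tokens")
```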

Jacob Ziv (left) and Abraham Lempel published algorithms for lossless data compression in 1977 and 1978, both in the IEEE Transactions on Information Theory. The methods became known as LZ77 and LZ78 and are still in use today. Photo: Jacob Ziv/Technion

While Ziv and Lempel were working on LZ78, they were both on sabbatical from Technion and working at U.S. companies. They knew their development would be commercially useful, and they wanted to patent it.

“I was at Bell Labs,” Ziv recalls, “and so I thought the patent should belong to them. But they said that it's not possible to get a patent unless it's a piece of hardware, and they were not interested in trying.” (The U.S. Supreme Court didn't open the door to direct patent protection for software until the 1980s.)

However, Lempel's employer, Sperry Rand Corp., was willing to try. It got around the restriction on software patents by building hardware that implemented the algorithm and patenting that device. Sperry Rand followed that first patent with a version adapted by researcher Terry Welch, called the LZW algorithm. It was the LZW variant that spread most widely.

Ziv regrets not being able to patent LZ78 directly, but, he says, “We enjoyed the fact that [LZW] was very popular. It made us famous, and we also enjoyed the research it led us to.”

One concept that followed came to be called Lempel-Ziv complexity, a measure of the number of unique substrings contained in a sequence of bits. The fewer unique substrings, the more a sequence can be compressed.

This measure later came to be used to check the security of encryption codes; if a code is truly random, it cannot be compressed. Lempel-Ziv complexity has also been used to analyze electroencephalograms—recordings of electrical activity in the brain—to determine the depth of anesthesia, to diagnose depression, and for other purposes. Researchers have even applied it to analyze pop lyrics, to determine trends in repetitiveness.
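
One simple variant of the measure counts the phrases produced by an LZ78-style parse, which is the approach this sketch takes; repetitive input produces few distinct phrases, random input produces many:

```python
import random

def lz_complexity(s: str) -> int:
    """Estimate Lempel-Ziv complexity as the number of distinct phrases
    in an LZ78-style parse: more unique substructure means a higher
    count, which means the sequence is less compressible."""
    phrases, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

print(lz_complexity("a" * 1000))    # highly repetitive: few phrases
random.seed(0)
print(lz_complexity("".join(random.choice("abcdefgh") for _ in range(1000))))  # many more
```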

Over his career, Ziv published some 100 peer-reviewed papers. While the 1977 and 1978 papers are the most famous, information theorists who came after Ziv have their own favorites.

For Shlomo Shamai, a distinguished professor at Technion, it's the 1976 paper that introduced the Wyner-Ziv algorithm, a way of characterizing the limits of using supplementary information available to the decoder but not the encoder. That problem emerges, for example, in video applications that take advantage of the fact that the decoder has already deciphered the previous frame and thus it can be used as side information for encoding the next one.

For Vincent Poor, a professor of electrical engineering at Princeton University, it's the 1969 paper describing the Ziv-Zakai bound, a way of knowing whether or not a signal processor is getting the most accurate information possible from a given signal.

Ziv also inspired a number of leading data-compression experts through the classes he taught at Technion until 1985. Weissman, a former student, says Ziv “is deeply passionate about the mathematical beauty of compression as a way to quantify information. Taking a course from him in 1999 had a big part in setting me on the path of my own research.”

He wasn't the only one so inspired. “I took a class on information theory from Ziv in 1979, at the beginning of my master's studies,” says Shamai. “More than 40 years have passed, and I still remember the course. It made me eager to look at these problems, to do research, and to pursue a Ph.D.”

In recent years, glaucoma has taken away most of Ziv's vision. He says that a paper published in IEEE Transactions on Information Theory this January is his last. He is 89.

“I started the paper two and a half years ago, when I still had enough vision to use a computer,” he says. “At the end, Yuval Cassuto, a younger faculty member at Technion, finished the project.” The paper discusses situations in which large information files need to be transmitted quickly to remote databases.

As Ziv explains it, such a need may arise when a doctor wants to compare a patient's DNA sample to past samples from the same patient, to determine if there has been a mutation, or to a library of DNA, to determine if the patient has a genetic disease. Or a researcher studying a new virus may want to compare its DNA sequence to a DNA database of known viruses.

“The problem is that the amount of information in a DNA sample is huge,” Ziv says, “too much to be sent by a network today in a matter of hours or even, sometimes, in days. If you are, say, trying to identify viruses that are changing very quickly in time, that may be too long.”

The approach he and Cassuto describe involves using known sequences that appear commonly in the database to help compress the new data, without first checking for a specific match between the new data and the known sequences.
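
Everyday tools offer a rough analogue of reference-based compression: zlib can seed its compressor with a preset dictionary, so new data that resembles a shared reference compresses far better than it would on its own. The sketch below is only an illustration of that general idea, not the scheme in Ziv and Cassuto's paper; the “genome” here is a random stand-in:

```python
import random
import zlib

random.seed(42)
# A reference both ends already store (a stand-in for known sequences in
# a DNA database), plus a new sample that is a near-copy with one change.
reference = bytes(random.choice(b"ACGT") for _ in range(2000))
sample = reference[:900] + b"GATTACA" + reference[900:1800]

def deflated_size(data: bytes, zdict: bytes = b"") -> int:
    comp = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(level=9)
    return len(comp.compress(data) + comp.flush())

print("without the reference:", deflated_size(sample), "bytes")
print("with the reference:   ", deflated_size(sample, reference), "bytes")
# The receiver decodes with zlib.decompressobj(zdict=reference), so the
# bulky reference itself never travels with the sample.
```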

“I really hope that this research might be used in the future,” Ziv says. If his track record is any indication, Cassuto-Ziv—or perhaps CZ21—will add to his legacy.

This article appears in the May 2021 print issue as “Conjurer of Compression."