Giant array of low-cost telescopes could speed hunt for radio bursts, massive black holes

When the immense Arecibo radio telescope in Puerto Rico collapsed in 2020, it left gaping holes in astronomy. Now, a team from the California Institute of Technology (Caltech) hopes to address some of the gaps with a very different instrument: a tightly packed array of relatively inexpensive radio dishes that aims to quickly image radio sources across wide swaths of the sky. A nearly completed prototype array in California that the team calls a “radio camera” is already locating dozens of the distant, enigmatic eruptions called fast radio bursts (FRBs). Next year, the team hopes to begin construction on a much larger array with 2000 dishes that, together, will match the size of Arecibo.


Maura McLaughlin of West Virginia University is a leader of NANOGrav (the North American Nanohertz Observatory for Gravitational Waves), an effort to search for gravitational waves from supermassive black holes that relied on Arecibo for half its data. She says they took “a big sensitivity hit” when it was lost. “We really need a new telescope with a similar collecting area,” she says, and Caltech’s planned Deep Synoptic Array (DSA) fits that bill. “It will be a game changer.”


To gain sensitivity, radio astronomers can build big dishes like Arecibo or arrays of smaller dishes. But in most such arrays, the dishes are widely spaced, which sharpens their resolution but creates “a data deluge problem,” says Caltech’s Gregg Hallinan, DSA principal investigator (PI). Producing an image from a scattered array is like looking through a fragmented mirror, he says, and recreating the information from the missing parts is a complex nonlinear process known as deconvolution that can take weeks—or even years.
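The deconvolution step can be illustrated with a toy, one-dimensional version of the CLEAN algorithm long used in radio astronomy (the array size, beam width, source positions, and fluxes below are invented for illustration; they are not DSA parameters):

```python
import numpy as np

N = 64
center = N // 2

# Toy 1-D "dirty beam": a broad Gaussian standing in for the point-spread
# function of a sparse array (all sizes here are invented for illustration).
x = np.arange(N) - center
beam = np.exp(-(x / 4.0) ** 2)

def place(pos, amp):
    """A copy of the beam centered at `pos`, scaled by `amp`."""
    return amp * np.roll(beam, pos - center)

# True sky: two point sources; observing smears each one into a beam shape.
dirty = place(20, 1.0) + place(40, 0.5)

# CLEAN-style deconvolution: repeatedly subtract a scaled beam at the
# brightest residual pixel, recording each subtraction as a model component.
residual = dirty.copy()
model = np.zeros(N)
gain = 0.1  # loop gain: remove only a fraction of the peak per iteration
for _ in range(2000):
    peak = int(np.argmax(np.abs(residual)))
    amp = gain * residual[peak]
    model[peak] += amp
    residual -= place(peak, amp)
    if np.max(np.abs(residual)) < 1e-4:
        break

print(int(np.argmax(model)))  # brightest recovered source sits at pixel 20
```

Real interferometric deconvolution works on 2-D images with a beam set by the array layout; packing the dishes densely, as DSA does, fills in the beam and makes this step far easier.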


Many astronomers just want to regularly survey the sky for new objects or monitor sources for subtle changes without a heavy processing burden. Caltech’s solution, Hallinan says, is to “fill the mirror up” by packing low-cost dishes together. That makes deconvolution easier and should enable DSA to construct images in real time. The team has nearly finished assembling its prototype, the DSA-110, a T-shaped array of 95 dishes spaced 1 meter apart at Caltech’s Owens Valley Radio Observatory in California, plus another 15 “outrigger” dishes scattered up to more than a kilometer away. To keep construction costs to $4 million, the instrument uses commercially available 4.6-meter dishes, homemade amplifiers, and wave-channeling feeds fashioned out of cake tins. Most radio telescopes require expensive cryogenic cooling to reduce amplifier noise, but Caltech’s engineers have squeezed similar performance out of room-temperature circuits. Co-PI Vikram Ravi admits the circuits perform less well in the summer heat.


With a wide field of view, DSA-110 is good at detecting FRBs, intense blasts of radio waves lasting only milliseconds that arrive from all over the sky. Several thousand have been detected, but little more than a dozen have been traced to their home galaxies, which might hold clues to what is powering the bursts. DSA-110 aims to localize many more. When the array detects a burst, data from the outrigger dishes allow the telescope to zoom in and pin the FRB to its host galaxy.
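Why the outriggers matter follows from the standard interferometer rule of thumb that angular resolution scales as wavelength divided by baseline. A back-of-envelope sketch (the 1.4 GHz observing frequency is an assumed typical L-band value; only the roughly one-kilometer outrigger baseline comes from the article):

```python
# Rough diffraction-limited resolution of an interferometer: theta ~ lambda / B.
# The observing frequency is an assumed typical L-band value; the ~1 km
# baseline comes from the article's description of the outrigger dishes.
c = 3.0e8         # speed of light, m/s
freq = 1.4e9      # observing frequency, Hz (assumption)
baseline = 1.0e3  # outrigger baseline, m

wavelength = c / freq              # ~0.21 m
theta_rad = wavelength / baseline  # ~2.1e-4 rad
theta_arcsec = theta_rad * 206265  # radians -> arcseconds

print(f"{theta_arcsec:.0f} arcsec")  # ~44 arcsec
```

This sets only the nominal scale: in practice, fitting a bright burst’s position within the synthesized beam can localize it considerably more finely than the diffraction limit alone suggests.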

Read the full article at:

How do we smell? In a first, scientists created a molecular-level, 3D picture of how an odor molecule activates a human odorant receptor

Breaking a longstanding impasse in our understanding of olfaction, scientists at UC San Francisco (UCSF) have created the first molecular-level, 3D picture of how an odor molecule activates a human odorant receptor, a crucial step in deciphering the sense of smell.


The findings, appearing online March 15, 2023, in the journal Nature, are poised to reignite interest in the science of smell with implications for fragrances, food science, and beyond. Odorant receptors — proteins that bind odor molecules on the surface of olfactory cells — make up half of the largest, most diverse family of receptors in our bodies. A deeper understanding of them paves the way for new insights about a range of biological processes.


“This has been a huge goal in the field for some time,” said Aashish Manglik, MD, PhD, an associate professor of pharmaceutical chemistry and a senior author of the study. The dream, he said, is to map the interactions of thousands of scent molecules with hundreds of odorant receptors, so that a chemist could design a molecule and predict what it would smell like.


“But we haven’t been able to make this map because, without a picture, we don’t know how odor molecules react with their corresponding odor receptors,” Manglik said.


Read the full article at:

Resilient bug-sized robots keep flying even after wing damage

Bumblebees are clumsy fliers. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly.


Aerial robots, on the other hand, are not so resilient. Poke holes in a robot’s wing motors or chop off part of its propeller, and odds are pretty good it will be grounded.


Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings — and still fly effectively.


Read the full article at:

The NASA Pi Day Challenge

Can you use π (pi) to solve these stellar math problems faced by NASA scientists and engineers?


You may already know all about the mathematical constant pi (π) and how it can be used to calculate things like the circumference of a circle or the volume of a sphere. But did you know pi is also used all the time by NASA scientists and engineers to explore other planets? In this challenge, you can solve some of the same problems NASA scientists and engineers do using pi!




If you need some pi formulas, here are the ones you might want to look at.
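As a quick reference, the two formulas mentioned above — the circumference of a circle and the volume of a sphere — can be worked in a few lines of Python (Earth’s mean radius is used purely as an example input):

```python
import math

# Circumference of a circle and volume of a sphere, applied to an
# example value (Earth's mean radius, used purely for illustration).
radius_km = 6371.0  # Earth's mean radius in km

circumference = 2 * math.pi * radius_km          # C = 2*pi*r
volume = (4.0 / 3.0) * math.pi * radius_km ** 3  # V = (4/3)*pi*r^3

print(f"circumference: {circumference:.0f} km")  # ~40030 km
print(f"volume: {volume:.3e} km^3")              # ~1.083e+12 km^3
```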

Read the full article at:

Most Innovative Companies List of 2023

Fast Company’s 2023 ranking of the World’s Most Innovative Companies features OpenAI at No. 1, and covers 54 industries, from advertising, beauty, and retail to enterprise technology, design, and social impact.


Most Innovative Companies 2023, Fast Company’s definitive chronicle of the novel ideas transforming business and society, was 100% produced by people.


We may have asked ChatGPT—the AI chatbot created by our No. 1 company, OpenAI—about certain companies, but only because we wanted to see how it would reply. A very human impulse! OpenAI is just one example of how advances in artificial intelligence are reimagining corporate America, from drug discovery (DeepMind) to office work (Canva) to security (Robust Intelligence). But our ranking of the World’s 50 Most Innovative Companies—and the 54 lists that chronicle the 10 most innovative organizations in sectors from advertising to the workplace—showcase inspiring, insightful stories well beyond the current hot thing.


Healthcare is being made more equitable—for transgender individuals (Folx Health), women (Maven Clinic), children (Hazel Health), and lower-income patients (Cityblock Health)—by companies that are tailoring their offerings to communities that have traditionally been poorly served.


Iconic brands are changing how they communicate with fans, giving more power to creators and connecting with the culture (McDonald’s, Tiffany & Co.), while the entire world of restaurants and consumer packaged goods is being remade with content at its core (MrBeast).


On Earth, the soil is being fortified (Regrow Ag), the victims of climate disasters can now get a mobile grid to weather the disruption (Sesame Solar), and one of the world’s most influential brands, Patagonia, has made the planet its sole shareholder (Holdfast Collective).


Meanwhile, in space, public and private entities alike (NASA, Axiom Space) are advancing what’s possible in orbit.


Finally, there’s the art collective (Mschf) commenting on consumer and business culture with a knowing wink.


We hope you find these winners as inspiring as we did while selecting them.

Read the full article at:

Breaking the scaling limits of analog computing

A new technique greatly reduces the error in an optical neural network, which uses light to process data instead of electrical signals. With the technique, the larger an optical neural network becomes, the lower the error in its computations. This could enable researchers to scale these devices up to sizes large enough for commercial use.


As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.


An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy.


However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.
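How quickly small component imperfections compound in a deep analog network can be seen in a toy numerical experiment (the layer width and error magnitude below are invented, and this models only the accumulation problem, not the correction the MIT team describes):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8         # toy layer width (invented)
sigma = 1e-3  # per-component imperfection (invented)

def random_orthogonal():
    """An ideal lossless layer, modeled as a random orthogonal matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

errs = []
for depth in (4, 16, 64):
    ideal = [random_orthogonal() for _ in range(depth)]
    # Each fabricated layer deviates slightly from its ideal counterpart.
    noisy = [q + sigma * rng.standard_normal((n, n)) for q in ideal]

    prod_ideal = np.linalg.multi_dot(ideal)
    prod_noisy = np.linalg.multi_dot(noisy)

    err = np.linalg.norm(prod_noisy - prod_ideal) / np.linalg.norm(prod_ideal)
    errs.append(err)
    print(f"depth {depth:3d}: relative error {err:.1e}")
```

With the per-layer error fixed, the deviation of the full network’s transfer matrix grows with depth — the behavior the article says the new technique reverses.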


Even with error-correction techniques, due to fundamental properties of the devices that make up an optical neural network, some amount of error is unavoidable. A network that is large enough to be implemented in the real world would be far too imprecise to be effective.


MIT researchers have overcome this hurdle and found a way to effectively scale an optical neural network. By adding a tiny hardware component to the optical switches that form the network’s architecture, they can reduce even the uncorrectable errors that would otherwise accumulate in the device.


Their work could enable a super-fast, energy-efficient analog neural network that functions with the same accuracy as a digital one. With this technique, as an optical circuit becomes larger, the amount of error in its computations actually decreases.

“This is remarkable, as it runs counter to the intuition of analog systems, where larger circuits are supposed to have higher errors, so that errors set a limit on scalability. This present paper allows us to address the scalability question of these systems with an unambiguous ‘yes,’” says lead author Ryan Hamerly, a visiting scientist in the MIT Research Laboratory of Electronics (RLE) and Quantum Photonics Laboratory and senior scientist at NTT Research.


Hamerly’s co-authors are graduate student Saumil Bandyopadhyay and senior author Dirk Englund, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), leader of the Quantum Photonics Laboratory, and member of the RLE. The research is published today in Nature Communications.

Read the full article at: