New data storage powerhouse may just be … DNA!

Imagine Bach’s “Cello Suite No. 1” played on a strand of DNA.

This scenario is not as far-fetched as it sounds. Though far too small to withstand a rhythmic strum or a sliding bow, DNA is a powerhouse for storing audio files and all kinds of other media.

“DNA is nature’s original data storage system. We can use it to store any kind of data: images, video, music — anything,” said Kasra Tabatabaei, a researcher at the Beckman Institute for Advanced Science and Technology and a coauthor on this study.

Expanding DNA’s molecular makeup and developing a precise new sequencing method enabled a multi-institutional team to transform the double helix into a robust, sustainable data storage platform.
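The team's approach expands DNA's alphabet beyond the natural four bases, but the core idea of DNA as digital storage can be sketched with the standard bases alone, each carrying two bits. Here is a toy round-trip encoder — an illustrative simplification, not the study's actual chemistry or coding scheme:

```python
# Toy illustration: map bytes to DNA bases at 2 bits per base.
# NOT the expanded-alphabet scheme from the study -- just the basic idea.
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    """Encode each byte as four bases, most-significant bit pair first."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def decode(strand: str) -> bytes:
    """Invert encode(): turn each run of four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

strand = encode(b"Bach")
print(strand)                      # CAAGCGACCGATCGGA
assert decode(strand) == b"Bach"   # lossless round trip
```

Real DNA storage systems add error-correcting codes and avoid problem sequences (e.g. long runs of one base); the expanded alphabet in this work raises the information density per base beyond the two bits shown here.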

Read the full article at:

The Smallest MicroLED Display Ever Built Is Much Smaller Than a Bug

Mojo Vision’s microLED display has record-breaking pixel density and a somewhat mysterious purpose.


A Silicon Valley-based startup has recently emerged from stealth mode to reveal what it claims is the smallest, most pixel-dense dynamic display ever built. Mojo Vision’s display is just 0.48 millimeters across, but it has about 300 times as many pixels per square inch as a typical smartphone display.


The display uses microLED technology instead of OLEDs (as in several generations of Samsung devices and the iPhone X) or an LCD (as in every other iPhone). Made from gallium nitride, microLED displays can consume as little as 10 percent of the power of LCDs and are 5 to 10 times as bright as OLEDs. That combination makes them a good fit for head-up displays and other augmented reality applications.


Like other microLED companies looking to power augmented reality devices, Mojo Vision builds its gallium-nitride microLEDs as an array and then bonds the array to a silicon CMOS backplane that switches them on and off. Paul Martin, vice president for displays, says the company had to overcome several hurdles to build the 14,000-pixel-per-inch display. “The pixels are 1.3 [micrometers across], which means that the gap is only 0.5 µm. Smaller gaps creates harder and harder problems of fabrication.” He would not detail how the company overcame this problem and others.
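The quoted numbers are internally consistent: at 14,000 pixels per inch, the center-to-center pixel pitch works out to about 1.8 µm, which matches a 1.3 µm pixel plus a 0.5 µm gap. A quick sanity check, assuming square pixels on a uniform grid (my arithmetic, not Mojo Vision's spec sheet):

```python
# Sanity-check the quoted microLED figures.
MICRONS_PER_INCH = 25_400

ppi = 14_000                       # claimed pixel density
pitch_um = MICRONS_PER_INCH / ppi  # center-to-center pixel spacing
print(round(pitch_um, 2))          # 1.81 (µm)

pixel_um, gap_um = 1.3, 0.5        # quoted pixel size and gap
print(pixel_um + gap_um)           # 1.8 (µm) -- consistent with the pitch

display_mm = 0.48                  # quoted display width
pixels_across = display_mm * 1000 / pitch_um
print(round(pixels_across))        # 265 pixels across the 0.48 mm display
```

The last figure also puts the "300 times a smartphone" claim in perspective: a phone panel at roughly 450 ppi has about (14,000 / 450)² ≈ 970 times fewer pixels per square inch only if compared linearly; per unit area the ratio is (14,000 / 450)² ÷ 3.2, so the ~300× figure implies an areal comparison against a panel near 800 ppi-equivalent density — the article does not specify which baseline was used.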

Read the full article at:

Scientists map out the future of solar system exploration

Mars will stay in the spotlight through the 2030s thanks to a sample return mission. But in the 2040s, Uranus will take center stage, and Saturn’s moon Enceladus will steal the show in the 2050s. That’s according to the goals outlined in the latest decadal survey for planetary science.


With the release of “Origins, Worlds, and Life: A Decadal Strategy for Planetary Science and Astrobiology 2023-2032,” the planetary science community drafted a blueprint for how U.S. policymakers should invest limited resources available for space exploration. The decadal, the result of a steering committee that sought and received recommendations from the planetary science community, also helps set the international tone for solar system exploration.


And the mission priorities? A flagship orbiter and probe to study the ice giant Uranus. A flagship mission to Enceladus that will orbit the Saturnian moon, as well as land on it. A continuing program of Mars exploration, including a sample return mission. A host of other smaller mission possibilities. And, last but not least, defending our home world against asteroid impacts.


“The decadal,” as it’s typically referred to, is the blueprint for funding agencies like NASA and the National Science Foundation. And because American-led missions often include international instruments and investigators, the decadal has resonance for the global planetary science community.

Read the full article at:

The future of creativity: Adobe delves into the metaverse and creating new collaborative communities

After the Covid-19 pandemic, creativity and creative work are unlikely ever to look the same again. Adobe investigates how we can enter this new phase with a positive, collaborative, and innovative approach.

If there’s one thing we can be certain of, it’s that the past two years have pushed the creative industry into whole new territories. The changes we’ve seen to how creatives work and to their practice are broad and expansive. But Scott Belsky, Adobe’s chief product officer, opened the company’s Future of Creativity event with some positive forward thinking: “I feel like there has never been a better, more exciting time to be a creative person.” Following on from the last Future of Creativity event in 2019, and four years since he last attended a community event in London, Scott was the first to express how happy he was to get back together as a creative community. “A lot has changed,” Scott states, “but so many of the changes have been for the better.”

Read the full article at:

Imagen: Text-to-Image Diffusion Models

We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g. T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
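The FID figure quoted above (7.27 on COCO) is the Fréchet Inception Distance, which summarizes real and generated images as Gaussians over Inception-network features and measures the distance between those two Gaussians. Given the two sets of statistics, the metric is closed-form. Below is a minimal NumPy sketch of that formula — the generic FID math, not Imagen's actual evaluation pipeline:

```python
import numpy as np

def _sqrtm_psd(mat):
    """Matrix square root of a symmetric positive-semidefinite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # clamp tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2))."""
    diff = mu1 - mu2
    # tr((S1 S2)^1/2) equals tr((S2^1/2 S1 S2^1/2)^1/2); the inner matrix is
    # symmetric PSD, so the eigendecomposition route is numerically safe.
    s2_half = _sqrtm_psd(sigma2)
    covmean = _sqrtm_psd(s2_half @ sigma1 @ s2_half)
    return diff @ diff + np.trace(sigma1 + sigma2) - 2.0 * np.trace(covmean)

# Identical distributions score 0; the distance grows as the means separate.
mu, sigma = np.zeros(3), np.eye(3)
print(round(frechet_distance(mu, sigma, mu, sigma), 6))        # 0.0
print(round(frechet_distance(mu, sigma, mu + 1.0, sigma), 6))  # 3.0
```

In practice the means and covariances are estimated from thousands of Inception feature vectors for real and generated images; lower FID means the generated distribution is statistically closer to the real one, which is why the zero-shot 7.27 on COCO is a strong result.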

Read the full article at:

DALL·E 2 Will Disrupt Art Deeper Than Photography Did

The end of art or a new beginning?

Are we going to soon witness the creative genius of a virtual Picasso or Leonardo da Vinci?

We could be on the way with the latest trend in the AI field: visual generative models. While systems like GPT-3 create text from text, others like DALL·E 2 — a wordplay between Spanish painter Salvador Dalí and Pixar’s cute robot WALL·E — can create visual art from words. As you can see from the cover image of this article, this tech gives a new meaning to the idiom “an image is worth a thousand words.”

Read the full article at: