Yellow crazy ant males have two sets of DNA

Multicellular organisms typically develop from a single cell into a collection of cells that all have the same genetic material. A research team has now discovered a deviation from this developmental hallmark in the yellow crazy ant, Anoplolepis gracilipes. Males are chimeras of haploid cells from two divergent lineages: R and W. R cells are overrepresented in the males’ somatic tissues, whereas W cells are overrepresented in their sperm. Chimerism occurs when parental nuclei bypass syngamy and divide separately within the same egg. When syngamy does take place, the diploid offspring develops into a queen if the oocyte is fertilized by an R sperm, or into a worker if fertilized by a W sperm. This study reveals a mode of reproduction that may be associated with a conflict between lineages to preferentially enter the germ line.

Read the full article at: phys.org

AI develops specific cancer treatment in just 30 days, predicts survival rate

Artificial intelligence has developed a treatment for cancer in just 30 days and can predict a patient’s survival rate. In a new study published in the journal Chemical Science, researchers at the University of Toronto along with Insilico Medicine developed a potential treatment for hepatocellular carcinoma (HCC) with an AI drug discovery platform called Pharma.AI.

 

HCC is the most common type of liver cancer and occurs when a tumor grows on the liver, according to Cleveland Clinic. Researchers applied AlphaFold, an AI-powered protein structure database, to Pharma.AI to uncover a novel target — a previously unknown treatment pathway — for cancer and developed a “novel hit molecule” that could bind to that target without aid.

 

The creation of the potential drug was accomplished in just 30 days from the selection of the target and after synthesizing just seven compounds. After a second round of generating compounds, they discovered a more potent hit molecule — but any potential drug would need to go through clinical trials before widespread use.

Read the full article at: nypost.com

Winners of the First AI Film Festival – Showcasing the Latest Tools in AI Filmmaking

The winners of the first annual AI Film Festival celebrate the art and artists making the impossible at the forefront of AI filmmaking.

 

Runway, one of the leading companies developing artificial intelligence tools, announced the winners of its first annual AI Film Festival, which took place this winter. Of the hundreds of submissions, judges picked ten finalists and released their work to the public. The main goal of this competition was to celebrate “the art and artists making the impossible at the forefront of AI filmmaking.” We were curious and analyzed how different techniques were integrated into the winning films: from AI-generated art to whole 3D scene scans. Let’s take a look at the amazing new technology now available to any creator.

 

Entries had to be between 1 and 10 minutes long, and one of the main festival criteria was naturally the use of neural networks in the work. There was no strict definition of which AI to use, or how to feature it in the film, so the variety of tools used in the winning videos is really impressive. The use of state-of-the-art technology counted for only 25% of a film’s evaluation – judges also took into account the quality of the overall film composition, originality, and of course, the artistic message.

Using art generators as part of AI filmmaking

One of the shorts that impressed me most is “PLSTC” by Laen Sanches. Essentially, it’s a rapidly edited sequence of hundreds of pictures illustrating different ocean inhabitants wrapped in plastic and unable to escape. The director took striking images created with the AI art generator Midjourney, upscaled them with the help of the AI tool Topaz Labs, and stitched them together with subtle animation. By precisely choosing matching visuals, he achieved a very distinct and coherent film atmosphere. Not a word is said, but the message is crystal clear, and it hurts. Dramatic classical music also helps evoke deep emotions, and the result is a small narrative wonder. It didn’t win any of the prizes, but it is definitely worth watching.

Read the full article at: www.cined.com

Giant array of low-cost telescopes could speed hunt for radio bursts, massive black holes

When the immense Arecibo radio telescope in Puerto Rico collapsed in 2020, it left gaping holes in astronomy. Now, a team from the California Institute of Technology (Caltech) hopes to address some of the gaps with a very different instrument: a tightly packed array of relatively inexpensive radio dishes that aims to quickly image radio sources across wide swaths of the sky. A nearly completed prototype array in California that the team calls a “radio camera” is already locating dozens of the distant, enigmatic eruptions called fast radio bursts (FRBs). Next year, the team hopes to begin construction on a much larger array with 2000 dishes that, together, will match the size of Arecibo.

 

Maura McLaughlin of West Virginia University is a leader of NANOGrav (the North American Nanohertz Observatory for Gravitational Waves), an effort to search for gravitational waves from supermassive black holes that relied on Arecibo for half its data. She says they took “a big sensitivity hit” when it was lost. “We really need a new telescope with a similar collecting area,” she says, and Caltech’s planned Deep Synoptic Array (DSA) fits that bill. “It will be a game changer.”

 

To gain sensitivity, radio astronomers can build big dishes like Arecibo or arrays of smaller dishes. But in most such arrays, the dishes are widely spaced, which sharpens their resolution but creates “a data deluge problem,” says Caltech’s Gregg Hallinan, DSA principal investigator (PI). Producing an image from a scattered array is like looking through a fragmented mirror, he says, and recreating the information from the missing parts is a complex nonlinear process known as deconvolution that can take weeks—or even years.
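The classic deconvolution method in radio astronomy is Högbom's CLEAN, which iteratively subtracts scaled, shifted copies of the instrument's point-spread function (the "dirty beam") from the image. The article doesn't say which algorithm the DSA pipeline uses; the following 1D sketch is purely illustrative, with a toy PSF and made-up data:

```python
def hogbom_clean(dirty, psf, gain=0.1, n_iter=200):
    """Toy 1D Hogbom CLEAN: repeatedly find the brightest residual
    pixel and subtract a scaled, shifted copy of the PSF there."""
    residual = list(dirty)
    model = [0.0] * len(residual)
    center = len(psf) // 2
    for _ in range(n_iter):
        peak = max(range(len(residual)), key=lambda i: abs(residual[i]))
        amp = gain * residual[peak]
        model[peak] += amp           # accumulate the point-source model
        for j, p in enumerate(psf):  # remove the PSF centered on the peak
            i = peak - center + j
            if 0 <= i < len(residual):
                residual[i] -= amp * p
    return model, residual

# Toy example: one point source blurred by a symmetric 5-tap PSF
psf = [0.1, 0.5, 1.0, 0.5, 0.1]
sky = [0.0] * 30
sky[10] = 1.0
dirty = [sum(sky[i - 2 + j] * psf[j] for j in range(5) if 0 <= i - 2 + j < 30)
         for i in range(30)]
model, residual = hogbom_clean(dirty, psf)
# model now peaks at index 10, recovering the source position
```

Real interferometric deconvolution is 2D, nonlinear, and runs on vastly larger images — which is the "data deluge" Hallinan describes; packing the dishes densely, as DSA does, fills in the "mirror" so this step becomes far simpler.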

 

Many astronomers just want to regularly survey the sky for new objects or monitor sources for subtle changes without a heavy processing burden. Caltech’s solution, Hallinan says, is to “fill the mirror up” by packing low-cost dishes together. That makes deconvolution easier and should enable DSA to construct images in real time. The team has nearly finished assembling its prototype, the DSA-110, a T-shaped array of 95 dishes spaced 1 meter apart at Caltech’s Owens Valley Radio Observatory in California plus another 15 “outriggers” spread out more than a kilometer distant. To keep construction costs to $4 million, the instrument uses commercially available 4.6-meter dishes, homemade amplifiers, and wave-channeling feeds fashioned out of cake tins. Most radio telescopes require expensive cryogenic cooling to reduce amplifier noise, but Caltech’s engineers have squeezed similar performance out of room-temperature circuits. Co-PI Vikram Ravi admits they perform less well in the summer heat.

 

With a wide field of view, DSA-110 is good at detecting FRBs, intense blasts of radio waves lasting only milliseconds, coming from all over the sky. Several thousand have been detected, but little more than a dozen have been traced to their home galaxies, which might hold clues to what is powering the bursts. DSA-110 aims to localize many more. If a burst is detected, data from the outrigger dishes allow the telescope to zoom in and pin the FRB to its galaxy.

Read the full article at: www.science.org

How do we smell? In a first, scientists created a molecular-level, 3D picture of how an odor molecule activates a human odorant receptor

Breaking a longstanding impasse in our understanding of olfaction, scientists at UC San Francisco (UCSF) have created the first molecular-level, 3D picture of how an odor molecule activates a human odorant receptor, a crucial step in deciphering the sense of smell.

 

The findings, appearing online March 15, 2023, in the journal Nature, are poised to reignite interest in the science of smell with implications for fragrances, food science, and beyond. Odorant receptors — proteins that bind odor molecules on the surface of olfactory cells — make up half of the largest, most diverse family of receptors in our bodies. A deeper understanding of them paves the way for new insights about a range of biological processes.

 

“This has been a huge goal in the field for some time,” said Aashish Manglik, MD, PhD, an associate professor of pharmaceutical chemistry and a senior author of the study. The dream, he said, is to map the interactions of thousands of scent molecules with hundreds of odorant receptors, so that a chemist could design a molecule and predict what it would smell like.

 

“But we haven’t been able to make this map because, without a picture, we don’t know how odor molecules react with their corresponding odor receptors,” Manglik said.

 

Read the full article at: www.ucsf.edu

Resilient bug-sized robots keep flying even after wing damage

Bumblebees are clumsy fliers. It is estimated that a foraging bee bumps into a flower about once per second, which damages its wings over time. Yet despite having many tiny rips or holes in their wings, bumblebees can still fly.

 

Aerial robots, on the other hand, are not so resilient. Poke holes in the robot’s wing motors or chop off part of its propeller, and odds are pretty good it will be grounded.

 

Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings, and still fly effectively.

 

Read the full article at: news.mit.edu

The NASA Pi Day Challenge

Can you use π (pi) to solve these stellar math problems faced by NASA scientists and engineers?

 

You may already know all about the mathematical constant pi (π) and how it can be used to calculate things like the circumference of a circle or the volume of a sphere. But did you know pi is also used all the time by NASA scientists and engineers to explore other planets? In this challenge, you can solve some of the same problems NASA scientists and engineers do using pi!

 


 

If you need some pi formulas, here are the ones you might want to look at.
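The formulas themselves live behind the link, but the two examples named in the summary — the circumference of a circle and the volume of a sphere — are easy to sketch (the Mars radius below is an approximate, illustrative value):

```python
import math

def circle_circumference(r):
    """C = 2 * pi * r"""
    return 2 * math.pi * r

def sphere_volume(r):
    """V = (4/3) * pi * r**3"""
    return (4 / 3) * math.pi * r ** 3

# Example: Mars has a mean radius of roughly 3,390 km
print(f"circumference: {circle_circumference(3390):,.0f} km")
print(f"volume: {sphere_volume(3390):,.0f} km^3")
```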

Read the full article at: www.jpl.nasa.gov

OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art

After months of anticipation, OpenAI has released a powerful new image- and text-understanding AI model, GPT-4, that the company calls “the latest milestone in its effort in scaling up deep learning.”

GPT-4 is available today via OpenAI’s API with a waitlist and in ChatGPT Plus, OpenAI’s premium plan for ChatGPT, its viral AI-powered chatbot.

Read the full article at: techcrunch.com