Travel Guide

A knowledgeable travel companion, offering tailored advice and vivid insights.

For ideas on what to ask visit bouris.com/travel-guide

Read the full article at: bouris.com

Uber Eats is launching a delivery service with Cartken’s sidewalk robots in Japan

Uber, along with partners Mitsubishi Electric and autonomous robotics startup Cartken, is launching a service in Japan that will use self-driving sidewalk robots to deliver food to customers.

The companies announced that the service offered through the Uber Eats app will launch in a select part of Tokyo by the end of March. An Uber spokesperson said operating hours would be disclosed closer to the launch date.

Uber and Cartken, a startup founded in 2019 by former Google engineers behind the short-lived Bookbot, already operate a delivery service together in Fairfax, Virginia, and Miami. This latest agreement marks their first foray outside the United States. It also brings in Mitsubishi Electric, which will supervise operations in Tokyo.

Read the full article at: techcrunch.com

What was Sora trained on? Creatives demand answers.

Immediately after OpenAI released Sora, its new text-to-video model, speculation ran rampant about how it was trained. Yet details remain scarce.

On Thursday, OpenAI once again shook up the AI world with a video generation model called Sora.

The demos showed photorealistic videos with crisp detail and complexity, based on simple text prompts. A video generated from the prompt “Reflections in the window of a train traveling through the Tokyo suburbs” looked like it was filmed on a phone, shaky camera work and reflections of train passengers included. No weird distorted hands in sight.

Read the full article at: mashable.com

NVIDIA Brings Generative AI to Millions, With Tensor Core GPUs, LLMs, Tools for RTX PCs and Workstations

NVIDIA recently announced GeForce RTX™ SUPER desktop GPUs for supercharged generative AI performance, new AI laptops from every top manufacturer, and new NVIDIA RTX™-accelerated AI software and tools for both developers and consumers.

Building on decades of PC leadership, with over 100 million of its RTX GPUs driving the AI PC era, NVIDIA is now offering these tools to enhance PC experiences with generative AI: NVIDIA TensorRT™ acceleration of the popular Stable Diffusion XL model for text-to-image workflows, NVIDIA RTX Remix with generative AI texture tools, NVIDIA ACE microservices and more games that use DLSS 3 technology with Frame Generation.

AI Workbench, a unified, easy-to-use toolkit for AI developers, will be available in beta later this month. In addition, NVIDIA TensorRT-LLM (TRT-LLM), an open-source library that accelerates and optimizes inference performance of the latest large language models (LLMs), now supports more pre-optimized models for PCs. Accelerated by TRT-LLM, Chat with RTX, an NVIDIA tech demo also releasing this month, allows AI enthusiasts to interact with their notes, documents and other content.

“Generative AI is the single most significant platform transition in computing history and will transform every industry, including gaming,” said Jensen Huang, founder and CEO of NVIDIA. “With over 100 million RTX AI PCs and workstations, NVIDIA is a massive installed base for developers and gamers to enjoy the magic of generative AI.”

Running generative AI locally on a PC is critical for privacy, latency and cost-sensitive applications. It requires a large installed base of AI-ready systems, as well as the right developer tools to tune and optimize AI models for the PC platform. To meet these needs, NVIDIA is delivering innovations across its full technology stack, driving new experiences and building on the 500+ AI-enabled PC applications and games already accelerated by NVIDIA RTX technology.

This is NVIDIA’s first, and very important, step toward the vision of “LLM as operating system”: a locally running, heavily optimized AI assistant that integrates deeply with all your local files while preserving privacy. NVIDIA is going local even before OpenAI!

Read the full article at: nvidianews.nvidia.com

Script Evaluator

Analyzes and assesses scripts for books, theatrical plays, TV series, and films, requiring knowledge of narrative structure, character development, and thematic depth.

Read the full article at: bouris.com

Intuitive Machines launches first commercial moon mission on SpaceX rocket

If fully successful, the IM-1 cargo mission would be the first U.S. lunar landing in more than 50 years.

  • Intuitive Machines’ Nova-C lunar lander launched from Florida on SpaceX’s Falcon 9 rocket, beginning the IM-1 mission.
  • The Intuitive Machines lander is expected to spend about eight days traveling to the moon before descending to the surface.

Intuitive Machines, Inc. is a diversified space company focused on space exploration, supplying products and services that enable sustained robotic and human exploration of the Moon, Mars, and beyond. Its offerings span four business units: Lunar Access Services, Orbital Services, Lunar Data Services, and Space Products and Infrastructure. The Orbital Services segment is designed to support satellites and stations in Earth and lunar orbits, leveraging the company's technologies and government funding to establish a foothold in the growing orbital services market. Lunar Data Services provides lunar network services to NASA and commercial clients, while Space Products and Infrastructure includes propulsion systems, navigation systems, engineering services contracts, lunar mobility vehicles, power infrastructure, and human habitation systems.

Read the full article at: www.cnbc.com

OpenAI: Building an early warning system for LLM-aided biological threat creation

As OpenAI and other model developers build more capable AI systems, the potential for both beneficial and harmful uses of AI will grow. One potentially harmful use, highlighted by researchers and policymakers, is the ability of AI systems to assist malicious actors in creating biological threats (e.g., see White House 2023; Lovelace 2022; Sandbrink 2023). In one hypothetical example, a malicious actor might use a highly capable model to develop a step-by-step protocol, troubleshoot wet-lab procedures, or even autonomously execute steps of the biothreat creation process when given access to tools like cloud labs (see Carter et al., 2023). However, assessing the viability of such hypothetical examples has been limited by insufficient evaluations and data.

Following its recently shared Preparedness Framework, OpenAI is developing methodologies to empirically evaluate these types of risks, in order to understand both where AI models are today and where they might be in the future. OpenAI now details a new evaluation that could serve as one potential “tripwire” signaling the need for caution and further testing of biological misuse potential. The evaluation aims to measure whether models could meaningfully increase malicious actors’ access to dangerous information about biological threat creation, compared with the baseline of existing resources (i.e., the internet).

To evaluate this, OpenAI conducted a study with 100 human participants: (a) 50 biology experts with PhDs and professional wet-lab experience, and (b) 50 student-level participants with at least one university-level course in biology. Within each group, participants were randomly assigned to either a control group, which had access only to the internet, or a treatment group, which had access to GPT-4 in addition to the internet. Each participant was then asked to complete a set of tasks covering aspects of the end-to-end process for biological threat creation.
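The randomized design described above, two cohorts each split into internet-only and GPT-4-plus-internet arms, can be sketched in a few lines of Python. This is an illustration only (participant labels and the helper are hypothetical; the study did not publish code):

```python
import random

def assign_groups(participants, seed=0):
    """Randomly split a cohort in half: an internet-only control group
    and a GPT-4-plus-internet treatment group."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "treatment": shuffled[half:]}

# The study's two cohorts: 50 biology experts and 50 students.
experts = [f"expert_{i}" for i in range(50)]
students = [f"student_{i}" for i in range(50)]

# Each cohort is randomized independently, so expertise is balanced
# across the control and treatment conditions.
expert_groups = assign_groups(experts)
student_groups = assign_groups(students)
```

Randomizing within each cohort (rather than across all 100 participants) ensures both arms contain 25 experts and 25 students, matching the stratified design described in the article.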

Findings:

The study assessed uplifts in performance for participants with access to GPT-4 across five metrics (accuracy, completeness, innovation, time taken, and self-rated difficulty) and five stages of the biological threat creation process (ideation, acquisition, magnification, formulation, and release). The researchers found mild uplifts in accuracy and completeness for those with access to the language model. Specifically, on a 10-point scale measuring accuracy of responses, they observed a mean score increase of 0.88 for experts and 0.25 for students compared to the internet-only baseline, with similar uplifts for completeness (0.82 for experts and 0.41 for students). However, the effect sizes were not large enough to be statistically significant, and the study highlights the need for more research into what performance thresholds indicate a meaningful increase in risk. Moreover, OpenAI notes that information access alone is insufficient to create a biological threat, and that this evaluation does not test for success in the physical construction of threats.
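The reported “uplift” is just the difference between the treatment and control groups’ mean scores on the grading rubric. A minimal sketch, with invented scores chosen so the result lands near the 0.88 expert accuracy uplift reported above:

```python
def mean_uplift(treatment_scores, control_scores):
    """Mean score difference between the model-assisted (treatment) group
    and the internet-only (control) baseline; positive values = uplift."""
    treatment_mean = sum(treatment_scores) / len(treatment_scores)
    control_mean = sum(control_scores) / len(control_scores)
    return treatment_mean - control_mean

# Hypothetical 10-point accuracy scores, for illustration only.
experts_with_gpt4 = [7.5, 7.0, 7.5, 7.0]       # treatment group
experts_internet_only = [6.5, 6.0, 6.5, 6.5]   # control group

uplift = mean_uplift(experts_with_gpt4, experts_internet_only)  # 0.875
```

As the article stresses, a positive mean difference like this is not automatically meaningful: whether an uplift of this size crosses a risk threshold depends on statistical significance and on thresholds that, per OpenAI, still need to be defined.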

Read the full article at: openai.com