AINews

Tetsuwan Scientific is building robotic AI scientists that can run experiments on their own

When Cristian Ponce first met his co-founder Théo Schäfer, Ponce was dressed as Indiana Jones. It was a 2023 Halloween party hosted by Entrepreneur First, a startup program that pairs founders before they have settled on an idea.

Ponce recalls that the two hit it off. Schäfer had earned a master’s degree in autonomous underwater robotics from MIT and had worked at NASA’s Jet Propulsion Laboratory on the search for extraterrestrial life on Jupiter’s moons. “Very strange stuff,” Ponce says with a smile. Ponce himself had come from Caltech, where he did bioengineering work with E. coli.

The two bonded over tales of the mundane work of being a lab technician. Ponce (shown above left) in particular grumbled about how much human labor genetic engineering requires: a lab technician can spend hours manually transferring liquids from tube to tube with a handheld instrument known as a pipette.

Attempts to automate the process have not been successful, because the robots that can do it are specialized, costly, and require specialist programming skills. Every time scientists needed to change the parameters of an experiment, which happens constantly, they had to wait for a programmer to reprogram and debug the robot. For most labs, a human is more accurate, cheaper, and easier.

So the two founded Tetsuwan Scientific with the aim of modifying low-cost white-label lab robots. Then, in May 2024, they watched OpenAI’s launch of its multimodal model, the one featuring a voice that sounded like Scarlett Johansson.

“We’re looking at this crazy breakneck progress of large language models right before our eyes, their scientific reasoning capabilities,” Ponce said.

After the demonstration, Ponce opened GPT-4 and showed it an image of a DNA gel. The model not only interpreted the image correctly but also spotted a problem: an unwanted DNA fragment called a primer dimer. It then gave a thorough scientific explanation of what caused the artifact and how to adjust the experimental conditions to prevent it.

Ponce described it as a “light bulb moment”: LLMs could already diagnose scientific outputs, yet they had “no physical agency to actually perform the suggestions that they’re making.”

The co-founders were not the only ones investigating the application of AI to scientific research. Although Ross King’s robot scientists “Adam & Eve” date back to 1999, robotic AI scientists truly took off in 2023 with a string of scholarly publications.

Tetsuwan’s research pointed to the core problem: no software existed that “translated” scientific intent, what the experiment is trying to find out, into robotic execution. A robot, for example, cannot understand the physical characteristics of the liquids it is pipetting.

“That robot lacks the necessary background knowledge. Perhaps the liquid is viscous. Perhaps it will crystallize. Thus, we must tell it,” he stated. These are the kinds of hard problems Ponce believes LLMs can handle, using techniques such as retrieval-augmented generation (RAG) to suppress hallucinations.
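The article doesn’t describe Tetsuwan’s actual software, but the idea of feeding a robot “background knowledge” about a liquid before it pipettes can be sketched in a few lines of Python. Everything here is hypothetical and invented for illustration: the knowledge table stands in for what a real system might retrieve via RAG from lab documentation, and the parameter names are made up.

```python
# Hypothetical sketch: look up a liquid's physical properties and translate
# them into pipetting parameters for a robot. Not Tetsuwan's software;
# all names and values are invented for illustration.

from dataclasses import dataclass

# Stand-in for a retrieval step. A real system might use RAG over lab
# documentation so the model doesn't hallucinate liquid properties.
LIQUID_KNOWLEDGE = {
    "glycerol": {"viscous": True, "crystallizes": False},
    "saturated NaCl": {"viscous": False, "crystallizes": True},
    "water": {"viscous": False, "crystallizes": False},
}

@dataclass
class PipetteParams:
    aspirate_speed_ul_s: float  # aspiration speed, microliters per second
    dwell_time_s: float         # pause at the bottom of the aspirate

def plan_pipetting(liquid: str) -> PipetteParams:
    """Translate liquid properties into robot execution parameters."""
    props = LIQUID_KNOWLEDGE.get(
        liquid, {"viscous": False, "crystallizes": False}
    )
    # Viscous liquids need slow aspiration and a longer dwell to fill the tip.
    if props["viscous"]:
        return PipetteParams(aspirate_speed_ul_s=5.0, dwell_time_s=2.0)
    return PipetteParams(aspirate_speed_ul_s=50.0, dwell_time_s=0.5)

print(plan_pipetting("glycerol"))
print(plan_pipetting("water"))
```

The point of the sketch is the “translation” layer the article describes: scientific knowledge (glycerol is viscous) becomes concrete machine settings (aspirate slowly), which is exactly the step a bare robot cannot do on its own.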


Avijit Sah

