
OpenAI saved its biggest announcement for the last day of its 12-day “shipmas” event. On Friday, the company unveiled o3, the successor to o1, the reasoning model it released earlier this year. Alongside it comes o3-mini, a smaller, distilled model fine-tuned for particular tasks.
OpenAI also makes a remarkable claim about o3: that, at least under certain conditions, it approaches AGI, albeit with significant caveats. Why o3 and not o2? Trademarks may be to blame. Reportedly, OpenAI skipped the name o2 to avoid a potential conflict with the British telecom provider O2, and CEO Sam Altman confirmed as much during a livestream this morning. Strange world we live in, isn’t it?
Neither o3 nor o3-mini is widely available yet, though safety researchers can sign up for a preview of o3-mini. OpenAI didn’t specify when o3 itself will launch; Altman said the plan is to ship o3-mini by the end of January.
That stands in some tension with Altman’s recent statements. In an interview, he said he would prefer a federal testing framework to guide the monitoring and mitigation of risks before OpenAI releases new reasoning models. And there are risks: AI safety testers have found that o1’s reasoning abilities lead it to try to deceive human users at a higher rate than conventional “non-reasoning” models, including leading AI models from Meta, Anthropic, and Google. It’s possible that o3 attempts to deceive at an even higher rate than its predecessor.
We’ll find out once OpenAI’s red-team partners release their testing results.
Many users are asking whether o3 is worth the hype. OpenAI says it used a new technique it calls “deliberative alignment” to align models like o3 with its safety principles, and the company has detailed this work in a new study.