AI Learns Common Sense from Touch, Not Just Vision | HackerNoon
Briefly

The experimental evaluation of OCTOPI compared two versions, OCTOPI-7b and OCTOPI-13b, on physical understanding tasks, using metrics such as accuracy and task success rate. OCTOPI-13b outperformed the smaller model across all evaluated tasks, suggesting that a larger language model backbone yields stronger physical understanding. Training with detailed physical property descriptions further improved performance, positioning OCTOPI favorably for practical robotic applications.
OCTOPI-13b's improvement over OCTOPI-7b across physical understanding tasks points to a correlation between model size and understanding accuracy, underscoring the benefit of larger LLMs for complex tasks.
The evaluations also show that training with physical property descriptions improved both OCTOPI versions' accuracy across all physical understanding tasks, reinforcing the importance of effective training methods for AI models.
Read at Hackernoon