It seems I’m not the only researcher who has doubts about the quality of the claims made by late-stage startups. Reasoning ≠ transformer models. Transformer models predict the next word(s) according to learned likelihoods. This leads to impressive results in terms of structural coherence, but it makes no guarantee about the quality of the information. Nowadays every phone keyboard (the little suggestion bar above the keys) can predict the next word, not with the quality of a transformer model, but through a similar mechanism in its simplest form. You would not entrust that with reasoning capabilities either.
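To make that "simplest form" concrete, here is a minimal sketch of next-word prediction as a bigram frequency model, the kind of mechanism a keyboard suggestion bar could use. The corpus and function names are purely illustrative, and a transformer is of course far more sophisticated, but the core idea is the same: pick the continuation that was most likely, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

# Illustrative toy corpus; any text would do.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each following word appears.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" most often in the corpus
```

The prediction is driven entirely by co-occurrence statistics; nothing in the model checks whether the suggested word makes a statement correct, which is exactly the gap between fluent output and reasoning.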