Back in April 2022 I published here a brief item about Japan's now largely forgotten Fifth Generation Computer Systems initiative, on the occasion of the fortieth anniversary of its launch (which fell in that very month).
Much hyped at the time, the initiative was supposed to deliver the kind of artificial intelligence toward which, four decades later, we still felt ourselves to be straining.
Writing that item, my principal thought was of the overblown expectations people had of the program. However, in the wake of more recent work on Large Language Models, like OpenAI's GPT-4, it seems that something of what the fifth-generation computing program's proponents anticipated is at least starting to become a reality.
It also seems notable that even though fourth-generation computing has not been replaced by fundamentally new hardware, or even shifted its material substrate from silicon to another material (like the long hoped-for carbon nanotube), a different chip concept has played a key role in this progress: "AI" (Artificial Intelligence) chips, employed in a specialty capacity rather than as a replacement for the fourth-generation design. Indeed, just as anticipated by those who had watched the fifth-generation computing program's development, parallel processing has been critical to the design of these chips, both for "pattern recognition" and for accelerating the training of the models.
In light of all that, rather than regarding fifth-generation computing as a historical curiosity, one may see grounds for viewing it as simply having been ahead of its time, and as deserving of more respect than it has had to date. Indeed, it may well be that somewhere in the generally overlooked body of research produced in the course of the program there are insights that could power our continued progress in this field.