Amazon researchers have published a paper describing what they found to be a better way to select an architecture when building an artificial intelligence (AI) model.
In “On the Bounds of Function Approximations,” the researchers explain that they set out to establish a framework for understanding the computational bounds of Neural Architecture Search (NAS) in relation to its search space. Their analysis applies to any computational model, as long as it is equivalent in power to a Turing machine.
“Selection of a neural architecture is unlikely to provide the best solution to a given machine learning problem, regardless of the learning algorithm used, the architecture selected, or the tuning of training parameters such as batch size or learning rate,” said Adrian de Wynter, a research engineer with Alexa AI’s Machine Learning Platform Services organization and the paper’s author, according to VentureBeat. “Only by considering a vast space of possibilities can we identify an architecture that comes with theoretical guarantees on the accuracy of its computations.”
The researchers found that the best strategy is to search over architectures that guarantee Turing equivalence, and to find those models through an automated search procedure that tailors architectures to specific tasks.
“The paper’s … immediately applicable result is the identification of genetic algorithms — and, more specifically, coevolutionary algorithms … whose performance metric depends on their interactions with each other — as the most practical way to find an optimal (or nearly optimal) architecture,” de Wynter wrote. “Based on experience, many researchers have come to the conclusion that coevolutionary algorithms provide the best way to build machine learning systems. But the function-approximation framework from my paper helps provide a more secure theoretical foundation for their intuition.”
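For readers unfamiliar with evolutionary search, the sketch below illustrates the general idea in Python: a plain genetic algorithm evolving a toy space of feed-forward architectures encoded as lists of hidden-layer widths. This is an illustration of the technique in general, not the paper’s method; the fitness function here is a placeholder (a real NAS loop would train each candidate and score it on validation data), and a coevolutionary variant would instead make each candidate’s fitness depend on its interactions with the rest of the population.

```python
import random

# Toy genetic search over feed-forward architectures, each encoded as a
# list of hidden-layer widths. Purely illustrative; not from the paper.

WIDTHS = [16, 32, 64, 128]
MAX_DEPTH = 4

def random_architecture():
    """Sample a random architecture: 1 to MAX_DEPTH hidden layers."""
    depth = random.randint(1, MAX_DEPTH)
    return [random.choice(WIDTHS) for _ in range(depth)]

def fitness(arch):
    # Placeholder objective. In a real NAS loop this would train a model
    # with these layers and return, e.g., validation accuracy. Here we
    # just reward proximity to an arbitrary "good" architecture.
    target = [64, 32]
    return -abs(len(arch) - len(target)) - sum(
        abs(a - b) / 128 for a, b in zip(arch, target)
    )

def mutate(arch):
    """Replace one randomly chosen layer width."""
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = random.choice(WIDTHS)
    return child

def crossover(a, b):
    """Single-point crossover of two parent encodings."""
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(generations=30, pop_size=20):
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill the rest with mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return max(population, key=fitness)

print("best architecture found:", evolve())
```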