As someone who has spent a lot of my career working in AI, this actually really saddens me.
Over the last decade or two, 'machine learning' has become the go-to AI technique, producing all sorts of useful results. It has become the industry's, if not 'standard', then at least 'buzzword', approach to solving pretty much any problem.
It has a huge downside, though: researchers learn almost nothing from it. From an industrial perspective such systems tend to be useful. You figure out what inputs to give the system, optimize where you can, throw as much CPU and memory at it as you can afford, and magic comes out the other side. But ask people who work in the field how a system came to its conclusions, or what models it applied, and you tend to get blank stares. People are learning to set up black boxes, but research into how things actually work, or what underlying patterns exist, has really been falling out of favor.