My interest in AI goes back to the start of my career in (ugh) 1985. The IBM PC had just become a thing, but I was programming on a mainframe. In the early 90s, I toyed with "expert systems", which in practice were poorly performing knowledge bases. Neural networks had well-developed theory, but the hardware of the day limited their usefulness. I also spent some time on genetic algorithms and wrote a simple working GA I called genetica. It could find optimized solutions to whatever fitness function you defined, but unless the pseudo-random gods smiled on you, it would usually get stuck on a local min/max and miss the best solution. Genetica was less robust than statistical methods, but I reimplemented it several times over the years as an exercise for learning a new programming language. There is a wealth of academic work I can build on to go far beyond genetica.
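To make the local-max problem concrete, here is a minimal Python sketch in the spirit of genetica (the names, parameters, and selection scheme are illustrative, not the original code). The fitness landscape below has a wide local peak and a narrow global peak; runs that never happen to sample near the global peak converge on the lesser one, which is exactly the failure mode described above.

```python
import math
import random

def evolve(fitness, pop_size=50, generations=200,
           mutation_rate=0.2, mutation_scale=0.3, lo=-10.0, hi=10.0):
    """Evolve a population of 1-D genomes to maximize `fitness`."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]          # truncation selection
        pop = parents[:]                          # elitism: parents survive
        while len(pop) < pop_size:
            p1, p2 = random.sample(parents, 2)
            child = (p1 + p2) / 2                 # arithmetic crossover
            if random.random() < mutation_rate:   # Gaussian mutation
                child += random.gauss(0, mutation_scale)
            pop.append(child)
    return max(pop, key=fitness)

# A deceptive landscape: a wide local peak at x = 3 (height 1) and a
# narrow global peak at x = -3 (height 2). The population usually
# climbs the wide peak and never finds the narrow one.
def fitness(x):
    return math.exp(-(x - 3) ** 2) + 2 * math.exp(-100 * (x + 3) ** 2)

if __name__ == "__main__":
    best = evolve(fitness)
    print(f"best x = {best:.3f}, fitness = {fitness(best):.3f}")
```

Run it a few times and you'll see most runs settle near x = 3 with fitness about 1, reporting a "best" solution that is only locally optimal.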
At the end of 2023, I retired from my "regular" job, planning to pursue a number of interesting projects around machine learning, LLMs, and GAs. I've spent the last year brushing up on Python and learning the basics of the scikit-learn library. I started a project to add web search and math to GPT-3.5-turbo via the API, but it's already obsolete now that those features are built in.
I've been running small open-source LLMs at home, but only recently started getting useful results, after following Eric Hartford's wonderful article on running Dolphin. I expect the next decade to be exciting.