I keep reading about how those who don't adapt to AI and learn how to use it as a tool in their day-to-day working lives will be "left behind" or struggle professionally. I suspect that if there's any degree of truth to this, it would also apply outside of the workplace.
Be sure you sort out who is writing those articles. There's certainly a lot of AI company propaganda about how utterly important they will be for everything - because if they aren't, if they're just a cute toy burning billions of dollars on training, then they won't get their next rounds of funding! And when it comes to tailoring that inane blather to a particular audience, the models are well suited to the job.
... and another part of me that wants to convince everyone to abandon the last 15 years of technological "progress" in the name of re-upping our critical thinking skills as a society. Since that's not happening anytime soon, I am trying to cope better with living in a changing world and prepare/adapt as best I can!
Do that. Build analog, offline, in-person community again, because that's how humans thrive. We have over a decade of "cell phone mediated, for-profit tech company first" data on how it impacts humans, and it's awful. If you remember what life was like before there was a cell phone in every pocket, fight for that. You know what life can be without the black mirrors in every hand, feeding people their personalized algorithm of anger, outrage, hate, jealousy, envy, and greed (I believe the preferred term is "engaging content," but I'll call a spade a spade).
Am I being paranoid for having a viscerally negative reaction to this suggestion? I don't want to give any app or company any information about me anymore unless it's truly necessary.
You aren't. Anything you provide them will be used to manipulate you in the future. The standard rules of the internet apply: assume everything you say and do, even down to keystroke timing, is being logged so it can be big-data analyzed to better optimize how to sell ad impressions to the people who want to influence you.
Their business model is literally based on "stealing everything on the internet that isn't nailed down, and applying as much of a crowbar as they can find to that which is," and then claiming, with no legal justification, some sort of fair use. Based on those origins, it's safe to assume that they're doing the same with any sort of user input or sensor data they can extract from a phone. At least my laptop isn't feeding accelerometer and gyroscope data to the website without permission! The amount of "interesting data" you can collect from the sensors that don't need permission is... impressive.
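To make that concrete, here's a minimal sketch (TypeScript; the /collect endpoint is made up, everything else is standard browser API) of what an ordinary web page can gather without ever showing a prompt on many platforms:

```typescript
// Minimal sketch of quiet, prompt-free data collection by a web page.
// The /collect endpoint is hypothetical; the APIs are standard.

type Sample = { t: number; kind: string; data: unknown };
const buffer: Sample[] = [];

// Keystroke timing: even without logging content, inter-key intervals
// alone make a decent behavioral fingerprint.
document.addEventListener("keydown", (e) => {
  buffer.push({
    t: performance.now(),
    kind: "key",
    data: e.key.length === 1 ? "printable" : e.key,
  });
});

// Accelerometer / gyroscope stream, fired many times per second on devices
// that expose it to the page.
window.addEventListener("devicemotion", (e) => {
  const a = e.accelerationIncludingGravity;
  const r = e.rotationRate;
  buffer.push({
    t: performance.now(),
    kind: "motion",
    data: { x: a?.x, y: a?.y, z: a?.z, alpha: r?.alpha, beta: r?.beta, gamma: r?.gamma },
  });
});

// Ship it home in batches, even as the page unloads.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("/collect", JSON.stringify(buffer.splice(0)));
}, 5000);
```

iOS Safari has gated motion events behind DeviceMotionEvent.requestPermission() since iOS 13, but plenty of browsers will fire them with no prompt at all, and keystroke timing needs no permission anywhere.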
We already had data centers running websites (like this forum) consuming massive amounts of energy before either came along....
Apples and cold fusion reactors in deep space. The amount of energy required to run a traditional web forum is tiny compared to what the AI engines require for answers, and, worse, what they require for training. The efficiency gains in chips are just used to run larger models in the same megawatts. I ran, decades ago, some decent web forums on single core Pentium III servers, with tolerable enough amounts of traffic. Those machines can't even browse the web anymore. PHPBB is very energy efficient compared to any of the LLMs or other AI models.
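For a rough sense of the scale (every number below is an illustrative assumption, not a measurement), the per-response arithmetic looks something like this:

```typescript
// Back-of-envelope only; all figures are rough assumptions for illustration.
const forumServerWatts = 50;        // an old single-core box running phpBB
const forumRequestsPerSec = 20;     // modest but steady traffic
const joulesPerForumPage = forumServerWatts / forumRequestsPerSec; // ~2.5 J

const gpuNodeWatts = 10_000;        // one multi-GPU inference node, very roughly
const llmAnswersPerSecPerNode = 10; // a generous batching assumption
const joulesPerLlmAnswer = gpuNodeWatts / llmAnswersPerSecPerNode; // ~1,000 J

console.log(joulesPerLlmAnswer / joulesPerForumPage); // ~400x per response
// ...and that's before the one-time training runs, which for large frontier
// models are reportedly measured in gigawatt-hours.
```

Quibble with any individual number by a factor of a few and the ratio still lands in the hundreds.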
It's one thing to have AI replace tedious, menial work, but do we want to live in a world where all art -- music, photography, filmmaking, etc -- is created by machines?
If you're the sort of person who has to pay for those services, certainly! Why bother paying a human when ripping off the entire history of human art is cheaper? Recall the meme: we wanted a future where AI did our laundry so we could make music and art; we got a future where AI makes our music and art so we can do laundry. The right answer is to reject this future, and to build human-centric groups and organizations that still focus on human thriving, not ones that optimize for tech billionaire thriving.
... personally, as a Tech worker, I'd be surprised if there's anyone still left in Tech who doesn't already use it extensively to help with parts of their job that were much harder and more laborious just a few years ago, before LLMs were widely available.
I'm in tech. I don't. I've tried. It's worthless. Several different coding-optimized models, as well as several general purpose models, have a reliable way of making up functions and intrinsics that don't exist, but would be really convenient if they did. They will reliably generate assembly that doesn't build, or that is simply wrong (confusing general purpose and vector registers is a common one, or not moving data between them). It's quicker to just go find a reference myself and look up what I need than it is to ask professional bullshitters to generate the sequences, and I come away with actual understanding as well, instead of just some code that likely contains sharp corner cases that will eventually bite.
I saw a post the other day about AI, and I think it made a really good point. Yes, AI is awesome and wonderful etc, but there is going to come a time when it is not free. It will be enshittified.
Of course. When the expected monopolies don't arise for the first movers, or the later movers, eventually someone will start asking how those billions will be paid back, and it will, as it was with social media, be fairly easy to "tweak" things to better support monetization. Not in overt ways, of course. But Reddit is already worthless as a review site with all the fraud, and the AI-driven version will be far harder to sort out. Why, of course FooBar Mark 2 Sunglasses are the best, everyone says so! Says the query sponsored by FooBar Sunglasses.
And if you don't learn the skills of writing and thinking and everything else that AI can be relied on for right now, then when that happens you're really going to be in trouble.
But if you have those skills, you'll stand out. The person who can still drive has a leg up, at least for a range of options, in a world of self-driving cars where nobody learns to drive anymore. Of course, that hype bubble seems to have nicely burst on the sharp edges of reality too. This, too, shall pass.
However it works, it will be easy enough to use, if all you want is a confident 90% correct, 10% utter bullshit answer. And if you know enough to tell the difference, you don't need to ask the nice robot in the first place.
I think it's perfectly fine to continue ignoring it, though. Focus on keeping your actual human-skills sharp, and wait for the rubble to stop bouncing. At current cash burn rates, it probably won't be long.
That's without considering some of the fringier concerns about what sort of nasties may be able to influence this kind of vague, hand-waving fuzzy math. But I'm perfectly happy staying away from it all at this point.