Lately I have been in a deep dark hole. Not literally, but it’s how I’ve felt.

I’m toiling away getting content ready to launch my new data project, Data 4 the People (click here for my super cool “coming soon” page and branding courtesy of Garrett McLeod at Hammock Creative).

My initial topic of choice is the SNAP program (i.e., food stamps), which has recently been vaulted into the spotlight for its reckless payment of benefits to tons of dead people and fraudsters. 🤦🏻‍♂️

Sorry, that’s me being sarcastic, because I just get so upset when people use narrative to hurt others, rather than respecting its power and only using it to tell a complete story.

Anyway, my main finding so far is that SNAP does indeed have problems. But it’s not the problems that these rabble-rousers are highlighting.

Rather, it’s the opposite.

What I’ve found is that SNAP coverage is not keeping up with poverty in most counties. In some counties, there have been so many barriers erected at the state and local level that SNAP is all but non-existent. And whether these barriers exist or not seems to largely boil down to politics. If the state is completely controlled by Republicans (i.e., “a trifecta”), its counties are likely to have lower SNAP participation relative to their poverty level, and vice versa for Democratic-controlled states. Just sad, sad, sad, as it didn’t use to be this way.

But it is what it is.

At least when I release this, the data will be out there for people to use and come to their own judgements. And maybe, just maybe, some people will look at this data and use it to make changes to policy to help people in desperate need of help.

Anyway, that’s not the point of this post.

The point is this.

While in this deep dark data hole, I constructed several data visualizations that brought up all sorts of questions. Like, “what has Illinois done to increase its SNAP access,” and “what policies has Arkansas enacted to cut its SNAP program in half over the last ~10 years despite its poverty level not dropping by much?”

And poof, Gemini (my AI engine of choice lately) just spits out answers with perfect clarity, documentation, and referencing. That leads to follow-on questions, and within an hour, I pretty much know exactly what policy fixes to recommend to a state that wants to increase SNAP access.

AI turned me into a SNAP expert in an hour. That’s powerful.

But, that only happened because I knew what (very precise) questions to ask. And in this case, knowing what questions to ask took a week of data work for someone who has been honing this skill for decades.

So, how did I get to the point where I knew what questions to ask AI to achieve this creative efficiency that our tech overlords are promising? Well, I have to rewind back years and years and reflect on the countless hours I spent first learning how to answer mundane questions as an entry-level chemical engineer, and after that, an entry-level investor, and then after that, an entry-level data analyst. I probably spent a decade learning how to answer other people’s questions before I felt comfortable enough to start answering my own. It was an endless process of trial and error that gave me the experience I needed to now reap the benefits of AI.

So, that’s great for me. But what about for my kids?

This is where I get frustrated.

Our tech overlords are selling some promised land where everyone can use AI the way I am using it (which is transformational, to be clear). But the people who are selling us this story, like me, learned how to answer questions before they graduated to asking them. They put in the mundane work to learn how to think creatively, and are now projecting their lived experience onto everyone else to justify selling them stuff.

But this is clearly flawed logic. If my 11-year-old son grows up immersed in AI, which is solving all his problems for him before his brain is fully developed, how will he build the skills to know how to ask productive and helpful questions? I don’t see how it’s possible.

Breathing underwater… or drowning?

I love my analogies so I’ll offer one up here.

Let’s say that the people who have accumulated the experience to know how to use AI to enhance their creativity are people who have evolved to be able to breathe underwater. As technology has gotten better, the quality of the water has moderately improved. But now with AI, it’s reached the purest state it could possibly be. In fact, when we breathe it, we get superpowers! Freaking awesome! We need to tell everyone to jump in and snort up all this water, right?!

The problem is the kids who will be our future don’t know how to breathe water yet. We never required them to go through the mundane training needed to evolve this ability. They still just breathe air. So, what are we really doing when we tell them all to jump in and inhale a whole bunch of this glorious liquid productivity? We’re sending them to their figurative death is what we’re doing. They are going to drown in memes and deepfakes and Sora videos, rather than know how to use this technology to deliver the elevated, utopian future sold to us by our tech overlords.

And that brings me to the broader point of this post.

What works for me may not work for you

I think about how many hours I have spent listening to podcasts about running, training, nutrition, sleep, meditation, life hacks, and on and on. There is always someone out there willing to share their story and tell me about some miracle that happened to them because they did XYZ. And then, I do that thing, and usually don’t get the same result. But then I just go back and find some other advice from some other random person and try that on for size.

What if all these people (many of whom are making lots of money selling their brand or whatnot) are just outliers? What if they are all examples of survivorship bias? I am listening to one of the few people who found XYZ to be life-changing to such an extreme degree, and then letting my flawed brain assign strong causality to this randomness. And then on I go from one outlier to the next, trying to stockpile all the stuff they do onto my daily checklist, which just ends up stressing me the fuck out.

Screw that!

Look, I am not saying I am going to throw out all the rules on living a healthy life and just devolve into hedonism. Actually, it’s the complete opposite. What I’m proposing is that maybe I need to pay more attention to my life, and to what truly nurtures me and what doesn’t. Oh yeah, and quit listening to lifestyle podcasts and just live my freaking life based on my own experience of what nourishes my soul.

Is AI an existential crisis?

Anyway, back to AI to close this out. We really need to think hard about what’s going on here. We need to use our “Me,” and not our “me,” to think about this (read my last post if you don’t know this reference).

My “me” loves AI because it makes me a faster and more effective data analyst/researcher, which gives me the potential to make more money (cause you know “me” is all about the Benjamins!).

Meanwhile, my “Me” is absolutely terrified of AI. The gains that we are individually able to achieve seem inconsequential compared with the risk to the next generation, which is very much a part of “Me.” Hopefully I am just missing something, but as I have written before, what is humanity without creativity? And how do we build creativity without putting in the mundane work when we are young?

I don’t know the answer here. But now that I have journaled all my deepest and darkest fears on this existential crisis, I will do my best to let it go and get back to playing with data that is equally disturbing. ✌🏼
