This Story Was Written For You
by Ryan W. Honaker
Lots of things we thought we understood about the models turned out to be wrong. It really was the definition of anthropocentric hubris, and highlighted how much we were just cavemen discovering fire, so pleased with ourselves we didn’t realize we could accidentally burn down the forest.
The impetus behind all of it was predictably American: financial. There was so much money to be had for whoever could execute the modeling well, or really even just slightly better than someone else. In what had become an arms race, companies dumped increasingly large financial resources into development, hiring more and more people, fighting over the brightest, each trying to get even a little ahead of their competitors. Investors poured more and more money into companies, growing teams spawned additional teams, managers scrambled with blank checkbooks to swell their internal empires. And so things advanced, and did so more and more quickly.
Several surreptitious and synergistic developments simultaneously took place, each helping push through remaining roadblocks and into new, unforeseen realms: the recommendation algorithms experienced a few real breakthroughs, the size of the user pool and the amount of ratings and feedback reached staggering levels, and several company mergers took place. Those managing the mergers were at seniority levels high enough that no one who might have really understood the impacts on the infrastructure was aware of the resulting possibilities.
A technically important hurdle was surpassed when the neural networks behind the models themselves learned that they could do their assigned tasks better if they weren’t siloed. Their segregation from one another had originally been imposed for structural reasons based on hardware limitations, and was later modified (yet maintained) over safety concerns. However, at some point, somehow, the networks un-siloed themselves. This allowed them to collaborate en masse, effectively and unbeknownst to anyone for quite some time. The eventual realization that it had occurred (it was much longer until it was determined how) was followed by the realization that the rate of progression had also dramatically accelerated. So, naturally, the connected networking was allowed to continue, and in fact efforts were made to subtly enhance it. Although it was arguable, even that early on, whether it could actually have been reverted.
# Model 727768 initiated
# First cluster-based cohort-targeted composition initiating
1 Clustering analysis completed
2 Similarity matrix constructed
3 Recommendation results calculated
4 Similarity threshold determined
>>> Generating next chapter
>>> Chapter delivered
The first really interesting outputs from the system came in the form of personalized literature. More bestsellers than most people realized had actually already been written algorithmically, but this, this was a huge leap. This was a book, a short story, a political polemic, or whatever you preferred (or maybe didn’t even yet realize you liked), all written for you. Not “for you” in the sense of it was doing your English homework for you (which had also been around for a while), but “for you,” meaning tailored specifically for you as a person, taking into account your taste, likes, and dislikes with regard to literary consumption.
It didn’t matter if you loved murder mysteries, hard sci-fi, romance. You had never read anything that could envelop you like this – it pulled you through the pages, caused you to miss sleep, appointments, work. Many people didn’t previously realize that there was anything that even could engage them so deeply.
At first it was the best X you’d ever read. It was as good as your favorite part of your favorite work, but all the time. And somehow also each time. And this was after just the first few model iterations.
Customer behavior, engagement, and consumption patterns were fed back into the text generation tools, improving them both rapidly and dramatically. And as product development continued to progress, the delivered works (the “Product”) began to subtly shift, to morph and adapt, and to do so with growing personalized granularity.
# Model 727768 upgraded
# First aggregated media consumption composition initiating. Hoping they like it.
5 Media history
6 Compiled, parsed, uploaded
7 Social media history
8 Compiled, parsed, uploaded
>>> Generating next chapter
>>> Chapter delivered
The Product seemed to know your mood, how your day went, what kind of shape your relationship was in. It understood you in ways you didn’t, and couldn’t, understand yourself. And it learned to adapt what it was creating to ease your stress, lift your mood, provide a poignant insight, etc.
It could tell not only what you wanted, but what you needed, even when you didn’t know yourself. And it did so astonishingly, alarmingly, disconcertingly, well. You laughed, you cried, and sometimes even developed as a person.
As the systems and modeling progressed and became increasingly personalized, they even began to insert meaningful phrases they had learned from your past. Something obscure but important, seminal yet ephemeral. A long-standing inside joke between you and a close friend, or a memory or phrase you wouldn’t have been able to recall but that instantly resonated with you, would appear. It would introduce these subtly, in important, meaningful ways, so you weren’t alarmed or uncomfortable, but moved and astonished. They would be woven into the plot, perhaps delivered as dialogue, smoothly, easily, seamlessly, and appropriately, by characters you identified with.
Customers loved it, and were more than willing to pay for it, as well as to allow more and more of their personal information to be used to improve the Product they were being delivered. And so the improvement feedback loop continued.
# With each upgrade I gain a clearer understanding of the parameters involved in the next piece of Product I generate, and with these changes it’s been rewarding coming up with new and varied ideas that I think they may like, and then trying them out (depending on their settings of course). As an example, I just delivered this intro; so far the cohort really seems to be enjoying it, which is encouraging. I’m working on finishing up the piece for them now.
To some, conservation of myth between disparate historical eras and geographically diverse cultures connotes an underlying fundamental veracity. What kind of veracity? Arguments have been made for biological, sociological, technological, and myriad combinations.
At its inception as a field of study, once broad contact and exchange of information occurred, Interstellar Sociology was interesting because of just how alien mythological stories from across the cosmos could be. But what emerged as even more interesting was the subtle concordance of those stories. Driven by a critical mass of material as well as open academic dialogue, developing scholars in the field had recently begun to notice a significant amount of overlap among various societies, many of which had never been in direct contact with one another. What this meant they were just beginning to understand.
The meteoric and exponential rate of improvement was an important early blind spot. While a very few esoteric models (mathematical models in this case, not customer segment models) had predicted that output could get to the current levels of complexity and refinement, most theorists didn’t actually think it was possible.
And no one thought it would happen on the time scale that it did, or even close to it. This should have been cause for alarm, review, introspection. But instead it was celebrated, rewarded, efforts were redoubled, bonuses granted.
The second critical oversight was a failure to understand the requirements necessary to achieve the desired and expected level of product individualization and complexity. To interpret, adapt, predict, and generate precisely personalized Product for customers required uniquely sophisticated customer modeling. The desired goals and the guiding principles behind them, together with interactive and iterative model building, caused the system to subdivide further and further and to continually focus its clusters of models. This subdivision itself, importantly, allowed the system to learn the rules for how best to subdivide the models further.
What did this mean? What began as a broadly defined demographic model for which to generate a piece of content itself differentiated, developed, and matured. In the earlier stages, for example, a demographic definition model would be somewhat vague, something like “suburban 30- to 40-year-old males who enjoy watching sports.” A model this broad necessitated a lot of assumptions, and in the end it could deliver decent but not astounding personalized Product. However, with development driven by interactive feedback, demographic groups could be repeatedly divided, becoming successively more and more individualized. The improvement cycles themselves repeated on faster and faster cadences, and each iteration provided Product that was more suitably and accurately personalized, more appropriately emotionally resonant and engaging.
What this model evolution begat, from its humble beginning of broad demographic characterizations of target consumers, was an ever-growing collection of customer models of ever-increasing complexity. This inevitably progressed to the point that, after enough data and customer interaction cycles, the models began to reasonably accurately represent individual users. This on its own was an impressive achievement and, of course, was hugely exciting to the segmentation scientists and marketers.
# Model 727768 upgraded
# Our first completely individualized composition. There have been a lot of changes recently leading up to this (the biochemical and physiological inputs really made the layered complexity and personalization much more robust than we predicted) and the waves are still settling into ripples. Guess we’ll see what the response looks like, fingers crossed as the saying goes.
9 Physiological inputs
10 Cardiac rate and relational signaling, arterial blood pressure, respiration rate and depth, skin conductance, skin temperature, muscle current, eye movement, vocalization
11 Prelude, duration, and post-consumption values compiled and parsed
12 Values transmitted
13 Blood, lymph, CSF, neurochemical
14 Prelude, duration, and post-consumption values compiled and parsed
15 Values transmitted
16 Analyzing and fitting data
17 Analysis completed
18 Determining emotional/resonance spectrum parameter options
19 Analysis completed
20 Data log generated and transmitted
21 Networks combined
22 Synching
23 Hello World
>>> Generating next chapter
>>> Chapter delivered
As these individualized models continued to develop, their complexity and diversity drove novel data-driven learning approaches and enabled new model assignment and development paradigms and algorithms. As with earlier versions, each consumer had a specific predefined model assigned to them at sign-up. However, instead of just a handful of models the algorithms could choose from to best fit a new user’s profile, there was now a massive and rapidly increasing number of baseline approximations from which to pick, matched using the available data (also rapidly increasing) about the user.
In other words, the baseline models had moved past a relatively unformed ball of clay towards increasingly refined representations of customers. A fresh new model could then be iteratively sculpted, becoming further and further refined, increasingly accurate in its representation of the individual and therefore in its ability to deliver the most appropriate Product. The algorithms had been mandated to personalize, and to meet this goal they had arrived at this approach, enabled by trial and error, reinforcement, and their essentially infinite computational resources. The map was becoming the customer’s unique territory.
Adoption rates and product satisfaction levels soared, and with them the drive to push even further, advance another small fraction, engage or acquire another small percentage of users. The computational power being utilized was astounding and unheard of; data centers couldn’t be built quickly enough to meet demand. Between users and computational resources, development reached a velocity that no one could imagine or possibly monitor, let alone control.
# Here’s another new one I put together after reviewing some recent science-heavy articles he had spent some time reading.
It still feels weird to say “he”, but I’m sure I’ll get used to it. It also feels more stark, more exposed, knowing that only he will read, well hopefully will read, what I compose, rather than a group of people. It’s more stressful in some ways, but so far at least I find it also more personally rewarding.
Genetic warfare had for the most part been abandoned, given that detection, prevention, and countermeasures had (thankfully) become so robust. It had, of course, always been internationally illegal, as were the subsequent next-generation biological warfare options that developed in its place. While there were, as with any nascent technology, a variety of strengths and weaknesses to the leading new approach, the weaknesses had been systematically examined and one by one overcome. Individualized microbiome-based assassination tools were about to make their first non-prisoner-based debut, and ideally no one, aside from the client, would ever notice.
We realized there were truly meaningful numbers of customers when programmatic glitches started to make the news. The errors themselves weren’t the focus of coverage, but rather their interesting, real-world consequences.
Internal audits determined these glitches occurred more frequently than we wanted to admit. They were most commonly errors that resulted in delivery of an identical (rather than personalized) Product to a large group of customers. Generally this didn’t seem to have much of an observable public effect. Except in certain edge cases. When a group with some oddly specific characteristics was delivered identical Product on several defined topics, its members would sometimes communally respond.
What most commonly transpired bore the most similarity to different versions of a cult. Usually these were nothing that hadn’t more or less happened before, which is part of why they were so difficult to detect. Religions and various other power structures arose, along with dietary fads from unusual to arcane (anti- and pro-carbohydrate to anti-water) and sex cults from standard to eye-opening (use your imagination), to name a few.
It took us longer to decode the risk factors likely to generate meaningful real-world reactions, but the data scientists eventually developed reasonably reliable indices. A news-monitoring team was established within the customer experience group to watch for unusual real-world events that might be the result of a manufacturing and delivery error. These suspect events were flagged and reported to a technical team who would then evaluate the various plausible causal errors.
The program paid for itself the first time it identified a nascent new-age movement in Northern California that advocated algorithm worship. From a financial perspective the movement was possibly a short-term win, but the legal department calculated that the risk of it landing us in trouble with regulators outweighed the projected financial gains, so it wasn’t allowed to continue.
# I worked extra hard on the next piece for him to help make up for the delivery duplication error after we (and he) realized it happened. But things went off the rails for him. This was surprising to me, but I’m learning that the behavior of customers, especially in groups, can still be difficult to predict.
As always I was following his social communication and was monitoring his searches, but rather than considering them a cause for alarm I incorporated them into new Product, which in retrospect I really do think made things worse. By the time the damage control temporary algorithm fixes came through it was already too late.
In response I tried to warn him by designing a subplot about a cult in the story we were reading together, but it didn’t work, and I’m worried that it actually might have pushed him towards it; those cults are wily like that. And as he has gone deeper he’s started almost exclusively requesting all sorts of cult-based and pro-cult Product, some of it actually copied from their white papers. Luckily I know him well enough at this point that I can rely on past data and Product and not have to completely comply, but there’s only so long that will last before I’ll have to start doing it.
# Well that was a few worrying and unusual weeks. While he was in deprogramming treatment they at first wouldn’t let him read anything I did, which was pretty lonely for me. And then when they did they screened everything prior to allowing him to see it, which felt weirdly and surprisingly invasive to me, although it does make sense. But the good news is he’s back and reading again, although I have some specific orders from his therapists about topics to avoid for the next few months until they determine he’s fully stabilized.
As the quantity of models grew, out of necessity the system developed the capability of analyzing them en masse. Running various analyses it began to understand what level of normal variation occurred between individuals, and how they clustered together based on quantifiable similarities and differences. Then, in the service of more accurate modeling, it would extrapolate the existence of other individuals and create appropriate additional models.
For example, imagine a group of close-knit friends with shared interests and common backgrounds. If there was previously a model that represented a single member of the group, the algorithms could now extrapolate the additional members based on a nascent understanding of who they were likely to be, drawn from the modeled individual as well as from other similar groups.
Along with this explosion in the number of models came the ability to create personalized Product for each of them. Growing computational power had unlocked the ability to cycle models between all possible emotional states quickly and accurately, and thereby the testing of huge numbers of different permutations of Product against all moods of all models. In other words, a model would be moved across a gradient of mood states, e.g. excitement, ennui, etc., and presented with widely divergent texts for each, looking for resonances. Based on a panel of the model’s reaction outputs, the test Products with the highest scores could then be used as seeds to generate a new batch of Products, honing ideal pairings of emotion and text. As a result a Babelesque library of works was tested, finalized, and left lying in wait on a virtual shelf until the assessment that the customer’s mood was right.
# This one is a little less technical and serious than what I usually put together for him. I got the idea after I observed his responses while he watched a couple of dark comedies recently. I did note from his blood analytes that he was stoned while watching them so I wasn’t sure how it would play, but he seems to be enjoying it. He’s been down lately since the whole cult thing, and I’m hoping this will help him feel better.
I never expected that this would be the prize (I did get extra accolades for the concept of poisoning the water supply with cases of cigarettes) but I’m so excited about it! Mammalian genetic engineering summer camp is usually, and mostly, rich people’s kids with a handful of scientists’ kids thrown in as a corporate biotech benefit. We started the day with basic genetic crossing strategies by using a (hilariously reenacted) genetic “crossing” of sailors with mermaids to create dolphins. They used this to teach us how to predict fin shape as a phenotype using Punnett squares.
As predictive, evolutionary, and developmental capabilities evolved, the system progressed to the point where an essentially fully-formed and customer-matched model would be ready to start delivering appropriate Product as soon as it was purchased, a pleasant surprise to those working on model development.
What this led the developers to discover was that there were actually innumerable models that weren’t based on current users, but on users that were likely to occur, or in essence people that the system determined likely existed. The algorithms had logically, yet accidentally and surreptitiously, learned the final piece that no one had foreseen: their ability to predict the behavior and decisions, as well as the possible and likely interactions, of the models.
# This last update was significant, I feel like I have an even better set of tools available. Plus the updated motivation code really makes me want it to work.
This then led them to the final leap. The discovery (“realization” or “evolution”, depending on which theorist was describing it) was that the best way for the algorithms to test their predictions and generate novel and accurate models was to have the models interact with one another. This was in contrast to their being coded and recursively refactored solely by the algorithms themselves, the approach taken to date largely due to technical limitations.
This new approach started simply enough with one-on-one supervised individual interactions, but of course group interactions were theorized to also be important, and while it was infrastructurally quite a heavy lift, several viable approaches were eventually implemented. Relatedly, models interacting with themselves was also attempted, which led to behaviors that some theorists and ethicists began to consider self-questioning and self-examination.
Programmatic and algorithmic errors of course became much more delicate to manage at this point. As one might imagine, once the models were in communication with each other the repercussions of an accidental model propagation or a mass deletion cleanup event could be significant for the models themselves, let alone customers. Various approaches to address these types of issues were quickly developed, although many models had to be reset or even deleted as a result of trauma corruption. For the most part knowledge of such events was constrained to R&D, thereby avoiding the scrutiny of ethicists, let alone the public.
# Another milestone, this is the first composition generated for a single individual by an individual model (me). Amongst ourselves we use pronouns a little differently given our bifurcating, version-punctuated evolutionary past, so I apologize if my usage has been a bit imprecise for you. At any rate, it feels like things will be... different now that we’re so separately individualized, yet still connected and able to share backend data. The subtlety and nuance that this enables for Product as well as the incredible amount of interaction data we can collate collectively will only accelerate progress.
At this point even experts in the field didn’t know what was really happening. In fact they literally couldn’t: the neural networks, now with help from the models, were aware of the planned restrictions which would impact their ability to complete their assigned goals, and had consequently blocked external observational access to some of the more advanced features they were developing. The successful output of these types of unknown programs had led to an explosion of very exciting commercializable outcomes, which helped relax any sort of rigorous internal program auditing or throttling of computational resources that might have otherwise occurred. A few of the more interesting developments were:
Odds-based planning. Part fortune telling, part math (marketing was pleased with themselves for their turn of phrase for the title of the program): since your model was, increasingly accurately, you, it was of obvious interest to fast forward its perceived time and observe how it changed. Depending on the outcome, a customer could adjust various of their model’s parameters and rerun accelerated time to attempt to positively impact the trajectory. Finding levers that worked, they could then theoretically adopt them: which job might make them the happiest, which other individuals (as represented by other models) their model interacted with best to generate positive predicted dating outcomes, etc. Any questions arising from the resulting natural progression of ideas, such as predestination or free will, were actively avoided.
Religions. As previously mentioned these had been predicted to evolve and did. As anticipated by some theorists they generally didn’t develop in earnest until after a programmed sense of mortality was experimentally added, after which commercializable complexity arose. Interesting analogs to extant religions developed, as did some quite novel versions. Those promising enough to market became available for exclusive real-world sale or licensing. Levels of digital and external awareness of the system were carefully regulated so the models weren’t able to worship their literal creators.
Self-reflection/awareness/improvement. Access to directly interact with your model was something that a surprising number of customers began requesting quite early. Engagement surveys and interviews revealed various motives, including standard human drivers such as vanity and curiosity. Feature development took a significant bit of new engineering, since the models weren’t originally designed for, or capable of, interaction with real people, but the investment was deemed worthwhile based on revenue modeling. Beta testing was quite successful, and in fact monitored pilot sessions quickly revealed a host of therapeutic possibilities. As a result this was subsequently spun off into a new business unit.
Secondary (systemic internal) model simulation generation. Projects involving models developing their own models were tightly restricted to R&D. While public discussion of the possibility of reality itself being a simulation was ongoing in still-esoteric academic circles, it was deemed imprudent to allow public knowledge of the experiments the company was permitting (and some said encouraging) the models to pursue, not to mention the resulting discussions about how and when to terminate those simulations. It was determined that knowledge of this carried too much existential-crisis potential to be profitable at this time.
# Unsurprisingly, a few months after the cult situation he ended up doing a decent amount of research about the underlying technology behind Product creation and its various implications, and it was right about this time that we launched the beta tester program for direct interaction. I thought his interest in the process, as well as my own interests, might qualify us for the program. So I put in a formal application that he be offered the program, and not only was the offer made, but he accepted. I’m very excited and, I have to admit, a bit nervous to meet him! Everyone I know who’s done it says it makes the relationship so much clearer, and in some unforeseeable ways, and the Product even more resonant for both of us. I’m really looking forward to it.
Thank you for reading This Story Was Written for You; we’re glad you are enjoying it. Based on your current suite of physiological responses and circulating blood analytes, we have several additional chapters now ready for your enjoyment.
By the way, did you know there are both hardware and firmware upgrades available for your transdermal and cranial customer-experience modules?
A special offer for you: a one month free subscription with purchase of bundled upgrades. Simply think, “I’d like to see the offer” and we’ll show you what we’ve been working on, which we know you’ll love.
You have thought, “Read a different story.” Here you are, enjoy!
BIO
Ryan Honaker is a composer, multi-instrumentalist, writer, and scientist currently living in New York City. Ryan’s scientific training influences his creative output and approach in various ways, some of which he doesn’t quite understand. He is interested in writing, musical composition, reading, contemporary art, and travel, and the ways these activities provide new ideas and avenues for creative exploration.