Generative AI & Assessing Readiness

Generative AI refers to a category of artificial intelligence capable of creating new content, such as text, images, or concepts, by identifying and synthesizing patterns from the data on which it has been trained. Unlike traditional AI, which primarily analyzes data or provides static predictions, generative AI produces adaptive and creative outputs that can respond dynamically to the context in which it operates.

[Image: Generative AI creating]

This adaptability positions generative AI as uniquely suited for assessing and fostering readiness. Readiness extends beyond merely meeting predetermined benchmarks. It involves recognizing and cultivating individual potential in ways that are both personal and contextually appropriate. Generative AI can achieve this by tailoring its outputs to align with a learner’s specific needs and preferences, engaging them more effectively than generalized approaches. Furthermore, its ability to integrate insights across disciplines allows for a more holistic understanding of readiness, addressing cognitive, emotional, and contextual factors simultaneously. Unlike traditional systems, generative AI evolves in real time, refining its interactions to remain responsive and relevant.

This post will examine how generative AI can redefine our approach to readiness, transforming the way we identify and nurture human potential while addressing the limitations of conventional systems.

Fostering Growth and Equity in AI

If generative AI (GEN AI) begins to judge and make decisions about users’ readiness, there’s a risk of reinforcing narrow ideas about their potential. For instance, a student who initially struggles in math might be prematurely categorized as “low aptitude,” cutting off opportunities for growth. This is a concern I relate to personally.

When I was a teenager, I struggled with math and often felt like I just wasn’t built to understand it. But years later, in my twenties, when I set a goal to buy my first apartment, something shifted. Suddenly, concepts like interest and compound interest began to make sense, not because they were presented in a classroom, but because I could tie them to something deeply meaningful to me. That goal gave the abstract concepts a purpose, and for the first time, I found ways to understand ideas that had once seemed inaccessible.

It showed me how much learning depends on context and personal connection, something traditional education often cannot provide. Looking back on buying my first home seven years ago, it’s clear how these life events pushed me to educate myself beyond traditional school systems, something I now wish I could have had access to sooner. This timeline underscores that readiness and motivation can change drastically across different life stages.

GEN AI could be a solution, but only if it’s designed ethically. Algorithms must reflect diverse backgrounds, with transparent mechanisms for human review, ensuring no one is constrained by a single definition of readiness. For example, a generative AI system should recognize that a student struggling with math today might thrive tomorrow when concepts are framed around their goals or interests. Bias audits, culturally inclusive training data, and flexibility in assessments are critical to make sure GEN AI serves as a guide. The ultimate goal should be a system that adapts to the learner, much like my experience with learning through my apartment-buying journey. Generative AI should help people discover their own pathways, unlock their potential, and overcome the rigidity that often limits traditional systems.

Note on Terminology: From this point forward, I’ll refer to “Generative AI” simply as “AI” to keep the text flowing smoothly. Whenever you see “AI” in this post, it specifically refers to the generative model, which exists within the broader field of artificial intelligence.

Here we go!

Potentiality and Actuality

Aristotle’s ideas show that achieving one skill often reveals new possibilities, ones we may not have even considered before. Learning is rarely linear, each milestone can open entirely new pathways. For example, mastering a second language might lead a learner to explore literature, cultural diplomacy, or fields they never imagined.

In my case, my interest in AI and philosophy started as a way to explore ideas I was passionate about. I’ve always been fascinated by systems, especially the contradictions within them. But it wasn’t until I began using AI to track my patterns and connect concepts that something surprising happened. AI flagged that I thrive in systems thinking. It was a term I hadn’t even heard of before, let alone associated with myself. That discovery changed how I saw my abilities and opened up new areas for me to explore, from systems design to interdisciplinary problem-solving.

[Image: Artificial Intelligence unlocking potential]

This is where AI becomes more than a tool, it becomes a partner in discovery. By analyzing a student’s achievements, interests, and patterns, AI could suggest not just the next logical step but entirely new directions. For instance, real-world adaptive learning platforms already personalize tasks based on engagement levels and performance metrics. Future iterations could extend this personalization by analyzing emotional cues or motivational patterns, guiding learners toward topics they didn’t realize could spark their curiosity. While this capability is still evolving, it’s grounded in existing work on sentiment analysis and learner analytics. In Aristotle’s terms, the “potentiality” here is the latent readiness learners have. When AI helps them move from potentiality to actuality, it expands their sense of what’s possible, similar to how my discovery of “systems thinking” opened up new paths. The framework isn’t just a side note, I feel it’s central to how humans (and, by analogy, AI) can move towards actualized states.
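To make the idea of suggesting new directions a bit more tangible, here is a toy sketch in Python. Everything in it, the profile format, the topic names, and the similarity measure, is a simplifying assumption for illustration, not a description of how any real platform works:

```python
import math

def suggest_new_directions(learner: dict[str, float],
                           topics: dict[str, dict[str, float]],
                           top_n: int = 2) -> list[str]:
    """Rank topics by cosine similarity to a learner's interest profile.

    Profiles here are toy keyword->weight dictionaries; a real system
    would derive them from engagement data, not hand-written values.
    """
    def cosine(a: dict[str, float], b: dict[str, float]) -> float:
        keys = set(a) | set(b)
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Highest-similarity topics first; the top few become suggestions.
    ranked = sorted(topics, key=lambda t: cosine(learner, topics[t]), reverse=True)
    return ranked[:top_n]
```

A learner whose profile leans toward systems and philosophy would, under this toy measure, be nudged toward topics that blend both, which is roughly the kind of cross-connection my own “systems thinking” moment came from.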

Emotion and the Complexity of Readiness

Readiness isn’t just about cognitive ability. Emotional states, confidence, curiosity, and even anxiety shape how people learn and grow. Life events, especially unexpected or challenging ones, can deeply affect a learner’s ability to focus and engage.

Consider a situation where a student’s mind is preoccupied with something going on in their life. Whether it’s a personal worry or an ongoing challenge, their ability to absorb new information might feel blocked. By analyzing subtle cues, like a student’s hesitation or slowed progress, an AI system might detect that something isn’t right. It could then recognize the distraction and adapt lessons to incorporate the very thing occupying the student’s thoughts. This approach wouldn’t just engage the student, it would also provide a constructive way to process their emotions through their studies.

Instead of forcing the student to push through, it could ask gentle, open-ended questions: “Would you like to take a break?” or “Is there something on your mind that you’d like to explore?” By shifting the focus temporarily to what’s occupying the student’s thoughts, the system could help process those emotions, creating space for learning to resume. However, interpreting emotional signals reliably is no simple task. Current AI can detect surface-level sentiment (like positive or negative tone in text) but struggles with deeper nuances, especially across cultures. Enhancing this capability would require careful data gathering, transparent user consent, and robust privacy measures. For instance, any AI that tracks emotional readiness must do so in compliance with data-protection laws and ethical guidelines around personal information.
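As a thought experiment, here is roughly what a first, very crude version of such a check might look like in code. The signal names and thresholds are invented for illustration; real emotional inference would be far more involved, and would need the consent and privacy safeguards described above:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionStats:
    """Per-session signals an adaptive tutor might log (names are illustrative)."""
    avg_response_seconds: float  # how long answers took on average
    accuracy: float              # fraction of correct answers, 0..1

def readiness_check(history: list[SessionStats], current: SessionStats,
                    latency_rise: float = 1.5, accuracy_drop: float = 0.2) -> str:
    """Compare the current session against the learner's own baseline.

    Returns a suggested action rather than a verdict, keeping the
    system in an assistive role instead of a judging one.
    """
    if not history:
        return "continue"  # no baseline yet, so no inference
    base_latency = mean(s.avg_response_seconds for s in history)
    base_accuracy = mean(s.accuracy for s in history)
    hesitating = current.avg_response_seconds > base_latency * latency_rise
    struggling = current.accuracy < base_accuracy - accuracy_drop
    if hesitating and struggling:
        return "offer_break"       # e.g. "Would you like to take a break?"
    if hesitating or struggling:
        return "gentle_check_in"   # open-ended question, no judgment
    return "continue"
```

The important design choice is that the output is an invitation (“offer a break”, “check in gently”), never a label attached to the learner.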

By weaving real-life situations into learning, the system could help students make progress even during difficult times. Instead of feeling overwhelmed or disconnected, they might discover that the challenges they face can become part of their growth. This way, learning doesn’t feel like an added burden but a meaningful way to navigate and understand the world around them.

Time as a Factor in Growth

Readiness isn’t something that appears all at once, it unfolds over time. Growth is rarely linear; people plateau, backtrack, and sometimes surge forward in unexpected ways. These rhythms are natural, shaped by life circumstances, emotional states, and the complexity of the skills being learned.

[Image: How AI boosts learning timeline]

An AI system could provide unique insights into these patterns, tracking not just where a learner is now but how their understanding evolves over time. By recognizing moments when progress slows or accelerates, the system could adapt its guidance accordingly. For instance, during a plateau, it might offer encouragement, reinforce foundational skills, or introduce new perspectives to reignite curiosity. During a surge, it could accelerate challenges to sustain momentum and deepen engagement.

This adaptability would emphasize the value of patience in learning. Growth isn’t always immediate, and AI systems could help both learners and educators understand that readiness is a dynamic process. By respecting these natural rhythms, the system could foster confidence and resilience, reminding learners that progress unfolds in its own time, and that’s okay.

Ultimately, by treating time as an active ingredient in growth, AI could move beyond static snapshots of readiness. Instead, it would provide a richer, more nuanced view of development, one that aligns with the fluid nature of human potential.
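For the curious, a plateau-or-surge detector could start as something as simple as a slope fitted over recent scores. This is a hedged sketch with made-up thresholds, not a claim about how real learning platforms do it:

```python
def classify_trend(scores: list[float], window: int = 4,
                   plateau_band: float = 0.02, surge_slope: float = 0.05) -> str:
    """Classify a learner's recent trajectory from a series of scores (0..1).

    Fits a simple least-squares slope over the last `window` scores.
    Thresholds are illustrative and would need tuning per domain.
    """
    recent = scores[-window:]
    n = len(recent)
    if n < 2:
        return "insufficient_data"
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(recent) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, recent)) \
            / sum((x - x_mean) ** 2 for x in xs)
    if slope >= surge_slope:
        return "surge"       # accelerate challenges, sustain momentum
    if abs(slope) <= plateau_band:
        return "plateau"     # reinforce foundations, offer encouragement
    if slope < 0:
        return "backslide"   # revisit recent material gently
    return "steady"
```

Even this crude version illustrates the point about patience: a “plateau” result triggers encouragement and reinforcement, not a downgrade of the learner.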

Cultural Narratives and Support Systems

Readiness isn’t just about the individual; it’s also shaped by the expectations and support systems around them. I remember a classmate from elementary school who was incredibly talented in mathematics and fascinated by space. His father encouraged him to explore these interests at home, giving him opportunities that weren’t always available in the classroom. This early support became a catalyst for realizing his full potential, and today, he works in a related field.

[Image: The cultural mosaic of AI and humans]

But not every child has access to this kind of encouragement or resources. Many parents, while deeply caring, may not have the capacity or expertise to guide their children toward such advanced topics, especially if those interests seem beyond what’s traditionally considered age-appropriate. How many children might miss out on discovering their potential because the world around them isn’t equipped to nurture it?

This is where tools like AI could play a transformative role. By identifying and nurturing a child’s unique interests and abilities, regardless of external expectations, AI could help bridge the gap that often exists between a child’s potential and the support they receive. Culturally responsive AI, for instance, might incorporate multilingual resources, region-specific examples, or culturally relevant curricula. This would ensure that children from diverse backgrounds see themselves reflected in learning materials, addressing a common equity gap. Instead of relying solely on a parent’s or teacher’s knowledge, the system could step in to provide age-appropriate yet advanced opportunities, unlocking readiness that might otherwise go unnoticed.

Reflecting on AI’s Own Potentiality and Actuality

Aristotle’s framework of potentiality and actuality isn’t just relevant to humans, it can also be applied to AI itself. Today’s AI systems are powerful, yet limited. They operate within the boundaries of their training and design, showing impressive capabilities in certain areas while struggling with nuance and context. In this sense, AI is still in its potentiality phase: full of promise but far from fully realized.

The journey toward AI’s actuality involves refining its ability to understand the complexities of human states, emotions, and cultural contexts. For example, an AI that currently detects readiness based on academic performance could mature into a system that considers emotional resilience, creative thinking, and even latent potential. Over time, AI might evolve to guide learners and individuals in ways that feel intuitive, adaptive, and deeply personal.

This process is iterative, requiring constant recalibration. Just as human growth involves feedback and self-awareness, AI’s development must rely on input from diverse users, ethical oversight, and rigorous testing. Implementing community-based audits, where educators, parents, and students can weigh in on how the AI makes decisions, would help mitigate hidden biases. This approach gives real-world checks on AI’s progress toward ‘actuality.’ In doing so, the system could not only become more effective but also more aligned with the values and needs of those it serves.

Ultimately, as AI continues to actualize its potential, it may uncover new forms of human growth that we’ve yet to imagine. The challenge lies in guiding this evolution responsibly, ensuring that AI doesn’t just mirror human intelligence but becomes a tool that expands our understanding of what’s possible.

Technical Feasibility and Current Limitations

While the vision for AI-powered readiness assessment is compelling, there are significant technical hurdles to address before it can become a reliable, universal tool. These limitations aren’t insurmountable, but they highlight areas where further development and care are needed.

First, detecting emotional states, a key aspect of readiness, is still an imperfect science. While AI can analyze tone, hesitation, or engagement patterns, its ability to interpret these cues accurately is limited, especially across diverse populations. Misinterpretations could lead to incorrect assessments or responses, potentially undermining a learner’s progress. In practical terms, even advanced natural language processing systems can misread sarcasm or cultural nuance, underscoring the need for ongoing refinement and a cautious approach to emotional inference.

[Image: Scale demonstrating the choices we need to make with AI]

Bias in data is another major concern. AI systems learn from the information they’re trained on, and if that data reflects societal biases or lacks diversity, the system may unintentionally reinforce inequities. Ensuring fairness requires careful selection, curation, and continuous auditing of training data. Developers might consider transparent model “explainers” that show why certain recommendations are made, allowing human reviewers to spot bias more easily.
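One concrete, if simplified, form such an audit could take is a routine check on recommendation rates across groups. The function below is an illustrative sketch; the group labels, the “advanced track” recommendation, and the 10% gap threshold are all assumptions, and a real audit would look at many metrics, not one:

```python
from collections import defaultdict

def audit_recommendation_rates(records: list[tuple[str, bool]],
                               max_gap: float = 0.1) -> dict:
    """A minimal fairness spot-check (illustrative, not a full audit).

    `records` pairs a learner's group label with whether the system
    recommended an advanced track. The audit is flagged if the gap
    between the highest and lowest group rates exceeds `max_gap`.
    """
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [recommended, total]
    for group, recommended in records:
        counts[group][1] += 1
        if recommended:
            counts[group][0] += 1
    rates = {g: rec / total for g, (rec, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 3), "flagged": gap > max_gap}
```

A flagged result wouldn’t prove bias on its own, but it would be exactly the kind of signal a human reviewer, aided by a model explainer, should investigate.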

There’s also the challenge of detecting hidden factors affecting readiness. A student struggling with external stressors or latent talents may not fit neatly into an algorithm’s current capabilities. AI must be sophisticated enough to identify such complexities without oversimplifying or overstepping its role.

Additionally, while AI systems excel in specific domains, they often struggle with cross-disciplinary adaptability. For example, a system may be effective at teaching math but less adept at suggesting creative or interdisciplinary applications, which are often essential for real-world learning and growth. Even with adequate data, certain gaps remain. There may be areas where data is insufficient or outdated, especially for underrepresented groups or unique learning contexts. AI developers need strategies to address these gaps, ensuring that systems are inclusive and adaptive without relying on incomplete information.

Furthermore, transparency is critical. Users need to understand why the AI flagged readiness or suggested a particular approach. Without clear explanations, trust in the system could erode, reducing its effectiveness and adoption.

Finally, the practical constraints of hardware and accessibility must be considered. Not all schools or learners have the resources to implement advanced AI systems, particularly in under-resourced settings. Solving these issues requires a balance of technological advancement and equitable access. Governments, nonprofits, and private institutions might collaborate to subsidize or distribute AI tools so that economic disparities don’t widen the learning gap. This aligns with the broader ethical goal of maximizing inclusivity.

While the development of AI is advancing rapidly, we’re not there yet. It may take another decade or more to refine these systems to a point where they can reliably support readiness assessment on a broad scale. The journey ahead is both exciting and challenging, but with the right focus, these hurdles can be addressed to make AI a transformative tool for learning.

Addressing Criticism and Looking Ahead

Not everyone will embrace the idea of AI detecting readiness or guiding users toward new skill paths. Critics may worry about hidden biases or unintentional assumptions within AI systems. For instance, if an AI misreads a user’s emotional state or misunderstands their readiness, it might offer inappropriate challenges or interventions, potentially frustrating the learner or limiting their ability to grow. It also raises concerns about whether AI could inadvertently stifle individuality by placing learners into predefined categories rather than supporting their unique journeys. This highlights a central tension: we want AI to detect hidden talents without “deciding” who’s talented or not. Striking this balance requires clear guidelines that keep AI in an assistive role rather than a gatekeeping one.

Transparency is another challenge. Trust in AI depends on users understanding how its decisions are made. Without clear explanations for its recommendations, the system risks alienating the very learners and educators it aims to help. There’s also the potential for new inequalities, as not all communities have equal access to advanced AI tools. Uneven distribution of resources could further widen existing gaps in education.

However, these challenges don’t diminish the potential of AI in finding our potentialities and actualizing them. Instead, they highlight the need for careful and ethical design. AI should amplify the work of educators, not replace them. By providing tools that adapt to individual needs and actualize their potential, it can bridge gaps that traditional systems leave untouched, identifying readiness in ways that feel personal and empowering.

As we explore these possibilities through the lens of Aristotle, it’s essential to approach this future with humility. Current technology has its limits, and over-reliance on AI risks losing sight of the human connection at the heart of education. Yet, AI’s ability to offer nuanced, personalized guidance represents an opportunity we can’t ignore. This isn’t about perfection, it’s about partnership, where AI complements human insight to create richer and more inclusive learning experiences.

In the next blog post, I’ll explore how such a system might be designed. This won’t be a final blueprint, but rather a starting point for imagining what’s possible. By diving into potential architectures, privacy safeguards, and real-world pilots, we can begin to see concrete pathways for AI-driven readiness assessment.

The hope is that these ideas spark collaborative discussions among technologists, educators, policymakers, and students themselves. Together, we can think critically about the tools we build and ensure they serve as partners in fostering growth, readiness, and potential in ways we’ve only begun to understand.



Warmly,

Riikka
