Innovation Fellows in California

AI and the Liberal Arts: Lessons from the Innovation Fellows’ California Experience

By Allan Martinez Venegas, Director of Entrepreneurship & Innovation

In January 2026, the Department of Entrepreneurship & Innovation brought together five faculty members* from across disciplines for a trip to Silicon Valley as part of our Innovation Fellows project. There, we met with leaders from Google, LinkedIn, Stanford University, and more to discuss innovation, entrepreneurship, design thinking, and artificial intelligence. Beyond exposing participants to new technologies and companies, the trip surfaced deeper questions about learning, expertise, creativity, and the evolving relationship between humans and machines. Across conversations, site visits, and reflections, several themes emerged that challenge familiar narratives about AI and higher education.

What became clear is not a set of prescriptions but a set of orientations, along with some core insights:

1) AI Fluency is Becoming a Core Skill for Today’s Students

One of the most consistent realizations from the trip was that AI is no longer a speculative or peripheral technology. It is rapidly becoming embedded in professional practice across industries, roles, and disciplines. For students, this does not merely mean knowing that AI tools exist. It means developing AI fluency: understanding what these systems can and cannot do, recognizing their limitations, learning when usage is helpful versus harmful, and exercising judgment rather than deference.

Avoidance is not a durable strategy. Graduates will enter workplaces where AI is assumed, expected, or quietly integrated into everyday workflows. The educational challenge is therefore not whether students encounter AI, but whether they learn to engage it effectively, critically, and responsibly.

2) In the Age of AI, the Liberal Arts Matter More, Not Less

Paradoxically, the rise of generative AI reinforces the enduring value of liberal arts education. When machines can generate competent text, images, summaries, and code, the differentiating human capacities shift toward working effectively with others through challenge and disagreement, discerning and judging value (of sources, of opportunities, of decisions), navigating ambiguity, and creating self-driven structures that lead to results with limited guidance.

In short, fluent output is increasingly cheap. Sound judgment is not. Participants repeatedly observed that AI systems amplify the importance of asking good questions, evaluating sources, detecting weaknesses, and understanding consequences. These are precisely the intellectual habits cultivated by humanistic and interdisciplinary education. Rather than rendering liberal arts skills obsolete, AI may be making them more visible and more necessary.

3) AI is Not an On/Off Switch — It Is a Dial

A particularly useful metaphor that emerged during the trip was that AI is not a light switch that one simply turns on or off, but a dial that can be set to varying degrees. Different contexts call for different levels of assistance, augmentation, and independence: brainstorming versus final synthesis, exploration versus evaluation, drafting versus decision-making, and learning versus automation.

This framing avoids two unproductive extremes:

  • Treating AI as inherently corrupting to original thought
  • Treating AI as inherently authoritative

Instead, it emphasizes agency. The question is not whether AI participates in knowledge work, but how much, when, and under what norms of transparency and responsibility.

More Questions Than Answers

Ultimately, there is no single right answer for how to integrate AI into learning or work, and the absence of certainty is not a failure of clarity but a defining feature of technological transition. In partnership with the faculty fellows, E&I understands that what matters is the willingness to experiment thoughtfully, to try small and reversible changes, and to remain guided by the central purpose of education itself: cultivating understanding, discernment, and the capacity to engage a complex and evolving world.

As AI tools increasingly influence professional life across disciplines, we encourage faculty to consider how these technologies are already showing up in their classrooms, how students may encounter them after graduation, and how we can help learners develop critical, ethical, and intentional ways of using them. If you’re curious, wrestling with questions, or simply interested in comparing notes, we warmly invite you to reach out to the fellows, the AI working group, or the Entrepreneurship & Innovation team. These conversations and shared experiments are exactly how we, as a community, make sense of moments like this.

*Faculty Fellows:

From left to right: Phillip Rivera – Biology; Chris Wells – Environmental Studies; Lauren Milne – Mathematics, Statistics, and Computer Science; Julia Chadaga – Russian Studies; Joslenne Peña – Mathematics, Statistics, and Computer Science.