April 8, 2026
00:39:41

Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning (AC Ep39)

“The technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people.”

–Marshall Kirkpatrick

About Marshall Kirkpatrick

Marshall Kirkpatrick is founder of sustainability consultancy Earth Catalyst and AI thinking tool What’s Up With That. His many previous roles include founder of influence network analysis tool Little Bird, which was acquired by Sprinklr, where he was most recently Vice President of Market Research.

Websites:

earthcatalyst.co

whatsupwiththat.app

marshallk.com

LinkedIn Profile:

Marshall Kirkpatrick

What you will learn
  • How generative AI transforms cognitive tools and lowers barriers to advanced thinking
  • Techniques to combine human and AI-powered sensemaking for richer insights
  • Practical strategies for filtering and extracting value from infinite information
  • The importance and application of diverse mental models in modern decision-making
  • Methods to balance manual cognitive work with AI assistance for optimal outcomes
  • The role of adaptive interfaces in enhancing individual cognitive capacity
  • Metacognitive approaches to networks and how AI can foster organizational awareness
  • Ethical and societal implications of democratizing access to AI-powered cognitive enhancements
Episode Resources

Transcript

Ross Dawson: Marshall, it is awesome to have you back on the show.

Marshall Kirkpatrick: Oh, thank you, Ross. It’s such a pleasure to be reconnecting with you here. Thanks for having me on.

Ross Dawson: So, you were on very, very early in the podcast, when it was Thriving on Overload and I was doing interviews for the book, and some of the wonderful things you were doing got incorporated into Thriving on Overload. So I think today, in this world of generative AI, which has transformed everything, including the way in which we think, the Thriving on Overload themes are still super, super relevant, and in a way, we need to be talking about them more.

That theme at the time was finite cognition, infinite information. How do we work well with it? I don’t know if our cognition has become more finite, but the information has become more infinite, and there’s just more and more. But also, it cuts two ways, as in, what is the source of all the information? AI is also a tool. So anyway, let’s segue from some of your cognitive thinking tools, technology-enabled cognitive thinking tools and so on, which we looked at. So how do you—where are we? 2026, what do you think about human cognition in our current universe?

Marshall Kirkpatrick: Well, especially when you frame it up in Thriving on Overload terms. I mean, those were four, five long years ago that we last spoke, and the book that came out of it was just fantastic. I think it has some timeless qualities, and I think that the technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people. That’s what I hope.

I think that, yeah, between individuals and organizations, there’s so much that, historically, someone like you or me or the people closest in our networks were willing and able to do and excited to do, that many other people said, “That sounds like a lot of work.” The bar is lower now, because a lot of just the raw cognitive processing can be outsourced into a technology that serves as a lever.

Ross Dawson: Well, I mean, that idea of levers for these cognitive tools is interesting. I guess, the very crude way of saying it is, we’ve got inputs into our human brain, and then we are processing information. I’m just thinking out loud a bit here, but it’s like, okay, we have tools to be able to filter, to present, to find what is most relevant, to present it to us in the ways which are most useful—very obvious, like summarization, visualization.

Then as we are processing it ourselves, we have dialog, or we can have interlocutors who we can engage with and be able to refine and help our thinking. Does that sort of make sense, or how would you flesh that out?

Marshall Kirkpatrick: Yeah, I mean, when you put it that way, it makes me think about Harold Jarche and his Seek, Sense, Share model, right? I think that AI, especially when connected to things like search and syndication and other traditional technologies, can impact all three of those stages. It can hypercharge our search. I think the archetypal example of that, on some level, feels like the combinatorial drug research being done, where just an otherwise cognitively uncontainable quantity of combinatorial possibilities between molecules can be sought out and experimented with for a desirable reaction.

And then that sensing, or the pattern recognition that AI is so good at, is something that we do as humans—some of us better than others—and it’s a lifelong muscle to build and what have you. But the AI is really, really good at it, and so it’s a ladder to climb up in some of that sensing. And then the sharing component becomes so much easier with the rewriting capabilities—turn A into B, reformat something into a summary or a set of bullet points, or ideas and words into code. AI is just so excellent for that translation that makes new levels of sharing possible.

Ross Dawson: That’s fantastic. Yeah, I had Harold on the show again in the Thriving on Overload days. But you’re right, that’s extremely relevant. Let’s dig into that. I love that you brought up that combinatorial search, which is so important. As opposed to going into Perplexity to do a search, it’s far more interesting to find the uncovered connections between things, which are relevant to what you’re doing. And that’s—

Marshall Kirkpatrick: Absolutely. I remember reading, years ago, Dan Pink’s book “A Whole New Mind,” which preceded the generative AI era. But he said, if your kind of work is something that’s easily reproducible by computers, good luck to you. You really are going to need uniquely human practices in the future, and what exactly those are, I’m not sure, because the one that he identified, I don’t think has proven to be uniquely human.

But I really appreciated learning about it from him, and that was what he called symphonic thinking, or the ability to draw connections between seemingly unconnected phenomena. So for many years, I have been doing a personal exercise with pen and paper that I call triangle thinking, where I’ll take three different phenomena—maybe that’s the owl outside my window, one of the notes that I’ve taken on paper, and something I come upon on the internet, or maybe it’s three very deliberately related things. I label them A, B, and C, and I ask, what might A have to say about B? What might B offer to A, and vice versa? I write out the six unidirectional connections between those things. And without fail, one, two, or three of those end up being real keepers, where I say, “Aha, that’s a really interesting idea. I’m going to take action on that.” And now, by the time I’ve got the letter B written out, an AI has done that ten times over. I like to do it both ways—still both AI and with my naked brain—but that combinatorial ideation, the generative combinatorial ideation, is, yeah. I’m curious what your thoughts and experience and hope for that might be.
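
The triangle-thinking exercise Marshall describes has a simple mechanical core: three items yield exactly six ordered pairs, each of which becomes a "what might X have to say about Y?" prompt. A minimal sketch in Python (the function name and the example item labels are invented for illustration):

```python
from itertools import permutations

def triangle_prompts(a, b, c):
    """Generate the six unidirectional connection prompts
    between three phenomena, as in the triangle-thinking exercise."""
    return [f"What might {x} have to say about {y}?"
            for x, y in permutations([a, b, c], 2)]

# Example: an observation, a note, and something found online
for prompt in triangle_prompts("the owl outside", "a paper note", "a web article"):
    print(prompt)
# Six prompts, one per directed pair: A→B, A→C, B→A, B→C, C→A, C→B
```

Each prompt can then be answered by hand with pen and paper, or handed to an AI for the rapid combinatorial ideation Marshall mentions.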

Ross Dawson: Well, there’s a prompt I use called “Apply Diverse Thinking,” where it generates extremely diverse perspectives on a topic—who might those very unusual people to think about something be, and then what would they think about this particular situation? Of course, there are a whole array of different thinking tools. There’s Marshall McLuhan’s tetrad, which is a little bit similar to your thing where, again, you can and should do it—well, not manually. What’s the manual equivalent for the brain?

Marshall Kirkpatrick: Thoughtfully, perhaps. Yeah, good one—deliberately, manually. I mean, Azeem Azhar over at Exponential View uses a fountain pen and paper and will sometimes have his team come online and they’ll do two-hour thinking sessions with no AI allowed. They just get on, I believe, Zoom, and just think through things with pen and paper, individually and together. And then they’ll kick off OpenAI or what have you, and use all the tools afterwards.

Ross Dawson: Yeah, well, a couple of things. Actually, research has shown that in brainstorming, it is better for everyone to ideate individually before doing it collectively. And of course, that’s unaided.

I think there are analogs there where—actually, one of the frameworks I just released last week was basically to say, think it through for yourself before you ask the AI, because then you have a reference point. If not, you don’t have a reference point to say, “Well, what am I expecting it to do? Let me think it through for myself,” even if it’s just a little bit, as opposed to just going in blank—”All right, give me an answer.” Just that simple thing of thinking through for yourself first is enormous. What it does is, obviously, give you a reference point for that.

And I’m going on a lot about appropriate trust at the moment—as in, trust the AI enough, but not too much, which I think is absolutely critical capability. And part of it is being able to say, “Well, this is what I think it should be giving me.” Now you have a reference point for what it gives you.

Marshall Kirkpatrick: Yeah, that sounds great in many cases. I do think that’s the right tool for the job in a lot of places, but not necessarily all. I’m thinking of the Iron Triangle of product management—fast, cheap, good, pick two. On some level, just handing the AI the keys for certain decisions is uniquely fast and cheap, right? And maybe it’s good enough.

Ross Dawson: Oh yeah. Well, you’ve got to choose your battles, because if you’re now doing ten times what you were doing last week, then maybe for a tenth of those you can do some thinking before you delegate it to the AI.

Marshall Kirkpatrick: Yeah, a strategy for how to do that. I think, well, that sounds important—some checkpoints along the way, some random selection of testing things.

Ross Dawson: Well, that’s interesting. One of the critical things people talk about with AI model oversight is sampling. As they say, “Okay, I’ve got 1,000 outputs—I’m going to take 20 of them and check how good they are.” You’re not checking every output, but you’re doing some kind of ongoing sampling.
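
The sampling approach Ross describes can be sketched in a few lines. This is a generic illustration of spot-checking, not any particular tool's API (the function name and parameters are invented):

```python
import random

def sample_for_review(outputs, k=20, seed=None):
    """Pick a random subset of AI outputs for human (or second-model)
    spot-checking, rather than reviewing every single one."""
    rng = random.Random(seed)  # seed makes the sample reproducible
    return rng.sample(outputs, min(k, len(outputs)))

outputs = [f"output {i}" for i in range(1000)]
to_review = sample_for_review(outputs, k=20, seed=42)
print(len(to_review))  # 20 outputs selected for review
```

The seed parameter is a design choice worth noting: a reproducible sample lets a second reviewer audit exactly the same subset.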

Marshall Kirkpatrick: Are you checking with your own deliberate brain, or are you checking with another AI?

Ross Dawson: It could be either, depends on the case—how critical it is. This comes back, of course, to the fact that accountability is only human, and so the human who is accountable has to make that decision: “All right, I’m happy for another AI to check it,” or, “Actually, I want to go in myself to see.” And that’s a judgment call.

Marshall Kirkpatrick: Totally. And it feels like a process design issue and a personal accountability matter. I mean, “The AI made me do it” is not a viable excuse.

Ross Dawson: Let’s hope it remains that way. So, coming back to those Seek, Sense, Share stages—Sense is one of your superpowers, both in the way you think and also the way you use the tools. It’s probably worth introducing—you’ve just released this wonderful product called What’s Up With That. So just tell us about the product, but also, I want to go to the bigger context of sense—sensemaking, how we use it generally, how AI can use that, and your role with the tool in that.

Marshall Kirkpatrick: Yeah, you know, I think there are so many different ways that sense can be made of anything, so many different ways that anything you read or think about or do can be put into context. It’s just overwhelming. I think we all have our favorite—not all of us, but those of us who are into this have our favorite tools, our favorite ways to—you know, a lot of people will think about something in terms of its past, its present, and its future, or they will break it down in analysis into parts, or they’ll synthesize it together with other phenomena and see how to understand.

I think sometimes of the famous Donella Meadows quote, the mother of systems thinking, who said, “Systems thinking isn’t any better than analytical linear thinking than a telescope is better than a microscope.” So there’s just a superabundance of fascinating, powerful tools that all provide different views on anything we’re trying to make sense of. One of the things that I’ve always found a lot of joy and usefulness and power in is learning about new lenses and processes and tools. Now that generative AI has put the ability to develop software into my hands—instead of having to go and hire someone else to build that software—I have built a system that takes as many of those different models and lenses and processes for making sense of something as I can.

I mean, it would be trivial to pull up a list of 200 mental models. I might go visit Shane Parrish’s website and The Knowledge Project. I think of ones that would be particularly useful, like, “Tell me who the intellectual predecessors are of this thing I’m reading,” or one of the other capabilities inside of What’s Up With That—my favorite, probably, is a combinatorial one called Fertile Edges. That says, “Take what I’m reading right now, identify the topic that it is a constituent of, and then find other adjacent topics where innovative people have built bridges between those adjacent topics and what I’m reading about, and tell me who those people are.” And that’s really fun. So I have built this sensemaking system, and that’s a part of What’s Up With That.

There are really three parts to it. The first is, it analyzes whatever you’re reading or watching, and it pulls out the net new, truly novel, most notable elements. Yesterday, I was telling you, it was a little bit inspired by the US military intelligence guideline that says, when you’re writing up a report about something, focus on what’s new in that situation—tell us what we don’t already know. That’s the first thing that What’s Up With That does. It says, “All right, here’s what’s new in this document relative to its field,” because we just drew a real-time map of the state of the art, and we say, “Okay, here’s what’s really novel there.” The second thing that it does is that toolbox full of all the different mental models and lenses, and it recommends a sequence. One of my favorite books I ever read was “On Grand Strategy,” about strategic thinkers throughout history, which talks about the significance of thinking in terms of sequences of actions. So now, What’s Up With That will say, “Here’s a sequence of analytical lenses we recommend that you subject this document to,” and with a click, it’ll go and do that for you—it’ll do that cognition for you and then just give you a report.

The third thing that it does is what I call, in shorthand, compound learning. You don’t have to remember all the things that you read anymore, because our system extracts the causal claims from everything you read, archives them, and then compares everything you read in the future that you analyze with our system to your library of causal connections in the past, to say, “Whoa, we just found a chain of claims that could surface a multi-step risk or opportunity that’s relevant to your work.” We do that both for your data exhaust—your history of things you’ve analyzed—and we do persistent monitoring of the web to detect anything that could be relevant to a project or chain by that same kind of symphonic synthesis and connection. So those are the categories that it has.
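
The compound-learning idea (extract causal claims from each document, then look for multi-step chains across the whole library) can be sketched as a simple graph walk. This is an illustrative toy, not the actual What’s Up With That implementation, and the claim strings are invented:

```python
from collections import defaultdict

def find_chains(claims, max_len=4):
    """Given (cause, effect) claims extracted from past reading,
    surface multi-step causal chains by following cause -> effect links."""
    graph = defaultdict(list)
    for cause, effect in claims:
        graph[cause].append(effect)

    chains = []
    def walk(path):
        if len(path) > 2:            # a chain needs at least two steps
            chains.append(list(path))
        if len(path) >= max_len:
            return
        for nxt in graph[path[-1]]:
            if nxt not in path:      # avoid cycles
                walk(path + [nxt])

    for start in list(graph):
        walk([start])
    return chains

# Two single-step claims from different documents combine into one chain
claims = [("rate hike", "slower hiring"),
          ("slower hiring", "lower tooling spend")]
print(find_chains(claims))
```

In a real system the nodes would be normalized claims matched by an AI rather than exact strings, but the chaining logic is the same kind of symphonic connection-finding.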

Ross Dawson: Yeah, I think you’re only scratching the surface of what your tool actually does, and obviously, more generally, these are just pointing in wonderful ways to how you can go beyond saying, “Tell me about this, ChatGPT,” to some far more nuanced ways of getting AI to do it.

Marshall Kirkpatrick: People have had the same challenge with Google, historically. Google has struggled with that, to figure out—”I’m feeling lucky” was probably the first intervention in a novice, beginner’s mind, coming to a hyper-complex opportunity space. Even still, now, 20 years since Google launched, I feel like you can tell people that they can search for “site:domain keyword” to find instances of that keyword not in the web at large, just inside that specific domain, and most people don’t know that. It’s a simple power, and there’s a bunch of things like that.

So figuring out how to unlock—and I don’t know how much they’ve even worried about it, because they’ve got that cash cow of advertising—but people don’t even recognize, sometimes, whether they’re clicking on an ad or a search result. In polls, when people are asked, they say, “No,” even if they put the ads at the top or mark them as ads, or a bunch of stuff they do do, but nobody notices. So that interface of complexity and accessibility and scale—we’re in it again here now, in this generative AI era. There’s so much more that could be done than is immediately obvious. It’s a real challenge.

So I’ve taken the approach that I have, which is to roll up a bunch of that and turn them into buttons and recommend them automatically and try to recommend them just in time, and stuff like that. But I’m sure lots of different people are going to try to respond to that gap of simplicity and complexity in different ways.

Ross Dawson: Yeah, that’s—which comes back, I think, a little bit to, you know, I firmly believe that the heart of the future is interfaces. We have these extraordinary capabilities—finite cognition meeting infinite capabilities, let’s call it—and that’s very much at the individual level. The adaptive interface, I think, is going to be absolutely critical. All right, well, it’s after lunch and I’m not feeling so sharp—the interface adapts to you.

Marshall Kirkpatrick: So I heard you say that.

Ross Dawson: The interface adapts again.

Marshall Kirkpatrick: Right? I heard you say that in a conversation with Ramez Naam some time ago. I was listening to that interview that the two of you did together while I was playing hacky sack out in front of my house. I grabbed my hacky sack and I said, “I’ve got to go inside and do something about this idea of Ross—yes, interface variability.”

In that case, I did a little experiment that I didn’t implement because I decided not to, but the general idea I want to pursue further, and I’ll tell you what that experiment was. One of the capabilities inside of What’s Up With That is that you can get a reading review synthesized, so that instead of just a list of links, you can get a narrative document exploring the themes, weaving together the last ten articles that you’ve read, and it’s easier to remember and to think about. I decided to hit the Nanonets API and have an image put up at the top that illustrated the themes. Now, maybe it’s just because I read a lot of dystopian AI, authoritarian politics type of stuff, but the images were terrifying, and they’re kind of expensive and slow, and they also look kind of repetitive. I was like, “All right, Ross, I haven’t cracked that nut quite yet in the variable interface, but I think you’re really on to something there.”

Ross Dawson: I’ll try to work on that too, a little bit. So coming back to this wonderful thing we laid out, alluding to some of the wonderful ways we can use for really rich investigation of ideas and how to think. It comes back to this frame of mental models. All of us get our mental models from the moment we’re born—we get this understanding of the world, which is hopefully useful. Sometimes, some people’s mental models are not very effective in guiding them in how they work. Our role is to continue evolving, getting better. I call it enriching mental models. Back in my first book, I talked about that, and of course, that’s in the context of the world changing, so mental models can’t be static anyway.

In a way, what you’re pointing to is the many, many ways in which we can, at one point, improve our mental models. All right, I understand this linear lineage of thinking, and I can see the strands between that, and these neurons are connecting in my brain in some form. But how can we pull to that bigger picture of all of this lattice of things to be able to say, “All right, I am actually thinking better through these interactions”?

Marshall Kirkpatrick: You know, I think that there is a visceral sense—a sense of safety that can come sometimes when a new mental model illuminates a risk that you hadn’t considered before, and you breathe a sigh of relief and say, “Oh, thank goodness, I can now account for that.” And there’s an excitement with opportunity. There is something about a collective greater-than-individual opportunity here. I’m not sure what that looks like, but I feel like there’s some social and interpersonal and network-based dimension to it.

One of the other things I do is build systems for network self-awareness, to build metacognitive network monitoring kinds of systems. I feel like there are mental models on that level as well.

Ross Dawson: So I’ve got to dig into that—metacognitive network monitoring. Explain.

Marshall Kirkpatrick: Yeah. So every one of us, and our organizations, exists in a network of customers, suppliers, competitors, regulators, and thought leaders, with orbits that extend outwards. The signals are strongest from the closest orbits; those from the outer orbits, even from other industries or other topics, may be weaker and harder to hear, but really significant. It is overwhelming.

It is cognitively uncontainable for any of us to keep up with all the work being done, all the thoughts being shared, all the new developments and opportunities from all the different entities that we’re interconnected with. One of the other offerings that I build for organizations is a system where I go out and map as many of those as possible with people. Those might be your target accounts you’re wanting to sell to, or your peers in a community of practice. Then I set up systems—basically using RSS, email newsletters, web page change notification, the technical underpinnings—to listen. There are some forms of communication that organizations do naturally by default, and those tend to be speaking to their own customers.

If you can listen to what organizations are saying to their own customers at scale, you can pull in a large quantity of signal, and then the challenge is to winnow that down into just the filtered signals that are most relevant to your priorities. I’ve got a system that uses AI to do that. Then there are combinatorial possibilities as well. I’ve started merging that in with What’s Up With That now, for example, where when we’re watching your broader network and a signal gets picked up on the back end, we’re generating hundreds of possible scenarios for that signal to intersect with your work and projects and priorities, and then we’re filtering to say, “Yeah, but tell me just the subset of these that are most significant and imminent and actionable and interesting.” If there’s something, then we will alert you and tell you what’s going on. Otherwise, you never hear from us, and you just go about your business.

But a couple times a day, I get alerts. Yesterday I got an alert that said, “Hey, one of the founders of Manus, the AI platform that Meta just acquired for $2 billion, just got detained in China trying to go back to Singapore. Given your interests in AI and anti-authoritarian politics and the infrastructure battles around AI, we thought you might want to know about this.” I said, “Thanks, What’s Up With That, I really appreciate it.” That’s an example of the sort of thing—so that’s how I do it. Other customers will take that and use it to populate a podcast or a newsletter, and do both an intake and an output as a conduit of that kind of network self-awareness.

Ross Dawson: Yeah, well, as you know, metacognition is my mantra. I think one of the key points is this simple question: How can AI assist me in getting to a point of metacognition? I would argue, if we use AI even vaguely well, it’s already doing that, because you’re saying, “Okay, well, let me think about what I can do and what the AI can do,” and you’re starting to think of that system. The thing that enables this humans-plus-AI partnership is metacognition, because you can actually see from above and see your role and the AI’s role. So in this broader question, many of the things you’ve been talking about are ways AI is helping us get to a point of metacognition.

Marshall Kirkpatrick: Ross, can I ask you a question adjacent to that? I think I am not the only one who wants to know, perhaps—and maybe this is a trade secret, I don’t know—but how you think about your analysis and sharing of scientific research papers online? You’re so good at that, and you do a lot of it, and it’s really valuable. It comes to my mind when you talk about metacognition—what role does that function, what are you doing there, what role do you see that playing in this bigger conversation?

Ross Dawson: Well, I’ll just tell you the mechanics of it, which might partly answer your question. I go into, often, three or four of the AI engines, including Grok, actually, because it’s very good at search. I say, “Tell me the most interesting research papers in the last few weeks,” whatever—on, I might say, human-AI collaboration or AI and strategy, whatever it might be, just different frames. Then I go and look at them. To be frank, I probably should do some more filtering with AI and tell them, “Only from reputable authors,” etc., because I have to just look at a lot of stuff, but that’s useful in its own right. Then I start to see, okay, this is a paper which is not only interesting, but actually would be useful to summarize for other people.

I do a lot of surfacing—a lot. I’m very quick at scanning, so that’s just a mental process. At that point, when I found the paper, I’ve got a Gemini gem and an OpenAI GPT, both of which I call Insight Distiller. Basically, I stick the paper in there, it comes out, and I always rewrite it. I will either prompt the AI to improve it in various ways, and then always just rewrite or choose which of the points I put in, and so on. So there’s actually a fairly manual process, but very, very AI-assisted. To your point, there’s so much extraordinary research going on, and people don’t look at it. The function, I think, is what you’re alluding to—it’s just like saying, “This is the essence of a paper, and you can read it in a few minutes and get some really good insights, and hopefully that will inspire you to go have a proper look at the paper, because there’s a lot more in there.” To myself, of course, going through all that is enormous and valuable to me, but it’s useful to others too.

Marshall Kirkpatrick: Absolutely, wow. That is high-touch. That’s great. I bet you really have a lot of compounding learning as a result of it.

Ross Dawson: Yeah, it’s kind of this thing where, just the nature of how my brain works and my immersion in stuff, I think it somehow gets me to some decent understanding of what’s going on. So to round out, what’s the next phase? I think this is an extraordinary time, but in the frame of what we’re talking about—AI and cognition—from your perspective, or just the world’s perspective, where do we go from here?

Marshall Kirkpatrick: Well, I think that it comes down, in part, to values. I can’t help but think about this K-shaped future that we risk moving towards, where some people are using all kinds of augmented capabilities and building on top of past experience and education and what have you, and income inequality just gets more and more intense. The gap between people who are excited about this stuff and can use it, and everyone else, just gets all the bigger. That’s not good for anybody. I really hope that isn’t the case. I’d love to get the J of exponential change without too much of the K of increasing inequality. I think that’s the direction we’re pointed in, but I do hope that we can democratize access to a lot of these capabilities and figure out how to use them in partnership with other ways of thinking—like Azeem and his team, writing on paper, like some of the indigenous traditional knowledge practices around the world that are very place-based and around ecosystem balance and recognizing humans as a part of nature, working with AI and technologies. I’d love to see this be an additive experience, more than a destructive experience for humanity and the rest of the planet.

Ross Dawson: Yeah, and that’s what you and I are both working on: doing whatever we can to nudge things in those directions. So where can people go to find out more about your wonderful work?

Marshall Kirkpatrick: Well, these days, I am pointing people mostly to whatsupwiththat.app. That’s kind of my home these days for all the different work.

Ross Dawson: I’ll recommend it.

Marshall Kirkpatrick: Oh, thank you so much, Ross.

Ross Dawson: Very useful, and I’ve only just begun to use it so—

Marshall Kirkpatrick: Awesome, well, let’s stick some of those papers in there and red team it and hit “Find Science” and get other scientific reviews of the claims in the paper, etc. Thanks—it’s so great to be back in touch with you here and not just watch from a distance, but to get to put our heads together like this is a real pleasure.

Ross Dawson: Thanks so much, Marshall.

The post Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning (AC Ep39) appeared first on Humans + AI.
