Episode 1: Jon Rawski
[Music]
Vincent del Casino: Today, we're gonna have an amazing conversation with Dr. Jon Rawski, who is an assistant
professor in the Department of Linguistics and Language Development here at San Jose
State. We're gonna touch on topics of the intersection of math and language, the future
of natural language processing, and how we think about our relationship to everyday
language and computational thinking. So come on board. This is gonna be a great conversation.
All right. Well, it's exciting to be here with Jon Rawski, who is an assistant professor
in the Department of Linguistics and Language Development here at San Jose State.
So thanks for being with us.
Jon Rawski: Yeah, good to be here with you, Vin.
Vincent del Casino: So I always like to delve into, as a starting question, how did language and the questions around language become of interest to you? Because clearly you've been studying this for quite some time now.
Jon Rawski: Yeah, that's right. I mean, I'm tempted to say that language is just inherently fascinating. I've yet to come across somebody who doesn't think it's fascinating. For me personally, I started out as an economics major because I was interested in applying mathematics to social issues, two things I cared a lot about, but I very quickly decided that I didn't like economics. And linguistics, I discovered it in university, much like most people do. It's a discovery subject. Once I took a course in it, I was very intrigued, because the problems were inherently fascinating, and I still find them just as fascinating. But I was intrigued more by the problems than by the answers I was given as an undergrad. Because as a student, you're given nice, tightly packaged things to work on and develop your skills. But I noticed that there were these deeper issues that were really tricky, particularly relating to how linguistic theories are constructed and how the mental representations of things work. And they're still there. And I have to say that as I've gone along, the questions have remained fascinating and the answers keep coming. I mean, language is fascinating. It is one of the core aspects of human nature. And so studying it, the little scratches that we make on the surface, tells us just that little bit more about what it means to be human.
Vincent del Casino: Yeah, no, I find that really interesting. I started studying Japanese as a second language when I was an undergraduate, eventually minored in it, lived in Japan, and eventually I learned Thai and some Lao and Northern Thai. And it's fascinating to be in a language, for example, where you don't need a subject. In Thai, you don't have to have a subject, and they don't conjugate verbs. It's all context driven. So there are all these questions that come up that are really interesting. The other thing that ties in, which I think is fascinating because a lot of people don't know the relationship, is the cognitive science side. You've got linguistics, and people think it's about language and language learning, and obviously it is, but there's this whole interrelationship with cognitive science and science more broadly. I think people from the outside looking in may not understand linguistics that way. What is the relationship between those two disciplines? Because your master's was actually in cognitive science.
Jon Rawski: That's right. So first of all, people don't really realize that language is something you can study scientifically. And that's because, at least with something like physics or biology, you're exposed to that style of thinking from, you know, age seven or something like that, right? But with linguistics, it's not really made clear to people how one could even study something like language using the scientific method. And so, in fact, a lot of students are quite surprised by this. One difficulty is that language is so inherent to what it means for us to be human that we tend to just downplay it. There are a lot of unconscious rules that we as humans just know, use, and learn, and we do all of that unconsciously. Studying that scientifically is the core of linguistics. Now, that's not to say that's the only way to study language. Of course there are a zillion other ways one could possibly study language. The reason I think linguistics is so interesting, and demands a scientific study, is that it's hard to think of an area of human anything that is not touched by language. I mean, think about what we're doing right now. It would be virtually impossible to do anything remotely like it without the way that language works. In fact, I've just met some of the people in the studio today; I've never talked with them before. And yet, within less than a second, we were able to converse with each other, understand each other, respond to each other, and communicate complex ideas to one another. And that's uniquely possible through language. And the human species is the only species that we know of that has anything remotely like language. And that is very surprising, right? It's very strange, but it's so obvious to us, because it's a completely normal thing for a human to learn, know, and use one of the 7,000 languages that are on the planet. Every human being will learn at least one of them, assuming normal conditions. And so that's somewhat puzzling. If I have a child, and the child is born wherever they're born, they will learn the language of that community. If I have a cat raised at the same time as that child, presumably hearing much of the same things the child does, with many of the same experiences the child has, it won't do anything remotely resembling that. As much as I'd like to think that I'm talking to my cat, I'm not, right? It doesn't know English the way the child just inherently knows English, or Japanese or Kinyarwanda or Thai or Lao or any of the other languages that we hear of. So that's the second surprising thing: language is ubiquitous, in that every human will do it, and it's hard to find an area of human anything that isn't touched by language. In fact, if you look at the different disciplines at SJSU, it's hard to find one that touches on what it means to be human that doesn't have something to say about language. If you're a psychologist, you will study the behavioral correlates of language. If you're in computer science, you will study what it means to have a language such that it's understandable, producible, computable by something.
If you're an anthropologist, you want to know what it is about a society such that it can organize around a speech community, and so on and so forth. If you're in literature, you want to know the different varieties of ways people communicate complex ideas through that particular medium that is literature. And so the linguist just says, well, they're all involving language. So there is some scientific object there that is language. And the way this relates to cognitive science is, I like this metaphor that language is a sort of window into the mind. When you study language, what you're really looking at is a sort of shadow cast by human cognition. So you look at how language works, and from there you infer things about how the human mind works.
Vincent del Casino: Well, that's really interesting, because in one of your papers, for example, you talk about how it's also a representational system. So it is both potentially always an abstraction, right? And at the same time, it's trying to explain something real. "Cup" could be any possible word, but we've assigned it to something. We've given it meaning through that assignment. So how do linguists think about that question of representation and abstraction and how it gets played out? And then the power that comes with all of that.
Jon Rawski: Well, so, right, you hit on this really interesting idea of abstraction, which is, again, something so intuitive to us, but it's extraordinarily difficult to study. I mean, the real answer is that we have no idea how things are represented in the mind. It's extraordinarily difficult. You have this blob of tissue in your head that does all sorts of magical things. And we know that that's where language has to be sitting, for various empirical reasons, one being that, for example, if I have a stroke, some of my language abilities just disintegrate, through various aphasias and so on and so forth. If I lose the use of a hand, for example, or a leg, nothing happens to my language system.
Vincent del Casino: Unless you grew up Italian, then you're always using your hands.
Jon Rawski: Well, so I should have said that gesture, for example, is a very interesting, recently studied aspect of language, right? But remember, that's a way that language is externalized, although it's certainly true that, for example, were we to sit on our hands for the remainder of this conversation, we would notice quite a degradation in our conversation, just because of how we are communicating these things. But as for abstraction, it's true that language has to be an abstract system, and the reason for that is that the brain does it so well. One of the first things you learn about studying any sort of complex system, whether that's language, whether that's vision, whether that's memory, or any of the other psychological constructs that we have, is that there's a process underlying it. That's the central metaphor in the cognitive sciences. "Thinking" on its own is too vague; we need something slightly more robust, and that idea is that there's a process going on. That linguistic process is what linguists study. So the idea is, when we do something like comprehend a sentence, what's going on? After the signal goes into the ear, or the eyes in the case of sign language, what's going on? How do you parse this signal, this very complex "blooming, buzzing confusion," as the psychologist William James would have called it? How do you even tell what linguistic data is, which involves a healthy degree of abstraction, and then use it and manipulate it mentally in order to derive a complex meaning from it? And then in reverse: say you start with a meaning. What are the particular ways that meaning is expressed linguistically, as opposed to any other way that you might express it? And so the idea is that language is merely one of a zillion different ways that humans are able to communicate. So the first surprising thing about studying linguistic cognition is that language is really not there for communication. It's one of the ways that humans communicate things, but humans communicate things in all sorts of ways. And language has all sorts of weird properties that don't really make sense from a communication standpoint. So it's everywhere, and yet it's terrible for communication under so many metrics. In fact, in the 50s and 60s, when communication theory was starting to propagate throughout the world, and it's responsible for most of the tech that we have right now, people understood that language is not optimized very well for communication. There are all sorts of weird little properties, because it's an evolved system, that just don't make sense if you're trying to design a communication system. And so that's the second surprising thing: it's this abstract computational system that every human possesses, seemingly from birth, that allows them to just be plopped somewhere in the world and then, without much instruction, learn one of these languages, or multiple, which is most of the time what happens. So on the abstraction bit, there are two ways to study that. The empirical way is to say, well, look, we have these sorts of phenomena that we see in the world in various different languages, so let's look at all of those and try to infer what the system is supposed to be.
And the second way to study it, slightly more mathematically, is just to sort of bite the abstraction bullet and say, well, we're dealing with an abstract, structured, complex process. And whenever there's structure and there are processes, there's mathematics. So that's where the mathematics comes in. So there are two ways of dealing with abstraction, and linguists try to do both. In the same way that, say, a physicist would go and look at different physical objects and say, well, I'd like to understand the fundamental laws of how this system works: I could do that either by going out and doing experiments, or I could do it by biting the theoretical bullet and saying, okay, let's look at how such a system could work in principle in the first place.
Vincent del Casino: So in one of your papers, you write, and I want to pull on the thread of the computational next, because obviously this is the moment. I mean, ChatGPT hits the scene. Natural language processing, those three words pulled together, people didn't talk about that even two years ago. Now it's everywhere. But in the paper, you wrote something really interesting, and I want you to pull the thread with me on this: "It is important to disentangle the notion of computation in cognition from computational models of cognition." Play with that for me. Kind of pull that apart and explain, because I think there's something very, very important here. I raise this because obviously we're doing computational work, and yet reproducing that, for lack of a better word, mechanically is very, very difficult, right? (Jon Rawski: That's right.) So what does this mean for us, how does it impact your work, and how do we think about computational thinking?
Jon Rawski: I mean, the study of computation as an object has been going on for a long time. It's probably most famously associated with Alan Turing, but it goes back at least to people like Ada Lovelace and so on and so forth. But Turing was the first to try to think about what it would mean for a human to do something as simple as calculating what's called a real number, which is one of the infinite numbers between zero and one where you can have as many decimals after it as you like. And Turing's real central insight was: imagine I gave a human as much time as they would like, I gave them paper and a pencil, and I tried to have them calculate, or even just write down, some of these numbers. His central insight was that actually you can't compute, by which he literally meant write down, all of the real numbers. And so his idea was, okay, but some of them I could write down, or I could execute an algorithm, which is just a step-by-step process, for writing them down. And so he called those the computable numbers. What he realized was that there are these central limits to even something as simple as calculating numbers. And if I can encode some process as one of those numbers, well, then it implies that not all programs, not all functions, not all numbers, not all behaviors are doable by a human, right? And so there's a tight link between computation and cognition, and it's a tempting metaphor. We use computational metaphors all the time when talking about behaviors, because computation is so ubiquitous. And linguistics, of course, picked up on that very quickly: if language was one of these types of computation, what kind of computational system was it? And so that's a way of linking the computational structure of language to what it means to compute something in the first place. In fact, it's so fundamental that if you go and take a theory of computation class here, you'll be confronted immediately with the notion of a grammar, with a language, with proofs about what kinds of grammars and languages can be computed in the first place and what that means. So that's the notion of computation and language as sort of the same thing. And that's been a very powerful result since probably 1950, about 70 years of work. Most work in linguistics, I would say, is now trying to explore that metaphor. And the second thing, as you mentioned, I think you said reproducing language computationally. You hit upon something there, which is reproducing. Think about the modern term AI; it stands for artificial intelligence. The first thing to remember, I mean, it's common now to talk about the artificial intelligence revolution, but I tend to think there wasn't really a revolution, because we're using stuff that's quite old. But we are using stuff that is very artificial. And the main idea here, and again, this is why language is so central, is that most of the advances you see in AI today are touted this way: how do we know a system is behaving intelligently? Well, if it can convince us that it somehow knows something about language, to our satisfaction as humans. The system can mimic linguistic behavior well enough that we, the humans interacting with it, are somewhat fooled by it, right? And so that's another reason that it's important to study language in the first place.
How would you know if the system works if you didn't have an understanding of it? Why are we so impressed by something like ChatGPT? It's because we're linguistic agents, right? And we can recognize, oh, the thing is doing something that looks linguistic, in the same way that when I get on an airplane, I know it's not a bird, but I get on it and it flies in a way that's similar enough for me that I'm happy with it.
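To make the grammar-and-language connection from theory of computation concrete, here is a minimal sketch in Python, an editorial illustration rather than anything from the episode: the context-free grammar S → a S b | ε generates the language of strings with n a's followed by n b's, a textbook example of a pattern that no finite-state machine can recognize exactly. The function names and test strings are invented for illustration.

```python
# Toy illustration of the grammar/language pairing from theory-of-computation
# courses: the context-free grammar S -> a S b | epsilon generates the
# language {a^n b^n : n >= 0}, which no finite-state machine can recognize.

def generate(n: int) -> str:
    """Derive a^n b^n by applying S -> a S b n times, then S -> epsilon."""
    return "a" * n + "b" * n

def recognize(s: str) -> bool:
    """Check membership in {a^n b^n}: equal numbers of a's then b's."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

if __name__ == "__main__":
    for n in range(4):
        w = generate(n)
        print(repr(w), recognize(w))   # every generated string is accepted
    print(recognize("aab"))            # False: not in the language
```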
Vincent del Casino: You know, it's really interesting, that is the Turing test, right? The ability to be convinced that you're not talking to a computer but to another human. What's interesting, and why I opened with that question, is that you're right, and this is something people miss very often: the theories were largely there; it was mostly computing power that had to catch up, in many ways, in order to do these sorts of things. But interestingly, I had the opportunity to be with a group of folks talking about the future of computing and humans and data and so forth. And one of the really interesting things is that question of cognition and consciousness, which are different things. But where we are now, and you pointed this out in one of your papers as well, and I'm a geographer by training, is that maps are abstractions, because you don't map everything on the planet. (Jon Rawski: That's right.) And yet we're getting to a place where we're ingesting so much data, down to such fine levels, that you have to ask: where is the abstraction in all of this? We have so much information now, and it's all getting pushed through these machines, and then these machines are being educated and trained and so forth. So how do we think about where language fits in all that? Because the ingestion of all these data and the amount of computing speed just really lets us do something.
Jon Rawski: Well, it's really tempting to think that there is less abstraction because, as you said, there's so much more data. But that's just false. There is abstraction there. There always is, right? It just depends on where it is. Unfortunately, these systems are designed in such an opaque way that it's almost impossible to go probe them, and that's the really annoying thing about them. Even if you're interested in what they can do per se, it's difficult to look in and go, that's the language bit, that's where it's doing something like agreeing a subject with a verb, or something like that. It's extremely hard. And a lot of people have done a lot of advanced statistical work trying to look at, when it's doing a linguistic thing, can we poke around and see which bit of the machine is doing it? But we understand almost zero about it. And it's not really clear what we would learn even if we could. Because, again, there's a sort of observer effect going on: we would like to think that there's something linguistic going on in there, but that's probably not the case whatsoever. In fact, just because the system outputs something that looks like human language doesn't mean it's doing anything remotely resembling, one, what the human is doing when it does something linguistic, or, for that matter, that it's doing anything linguistic at all. It just happens to be the case that we're convinced by it. In the same way that, say, people who are so-called psychics can often seem really convincing, and yet there are just a lot of underlying tricks to it. So one of the stories I like related to this, maybe you've heard about it: for many centuries, people have tried to have these sorts of trick animals, like pigs and horses and so on, that can seemingly do human stuff like add numbers, spell words out, or tell the future or something like this. And there was one horse in the 1800s in Germany named Clever Hans. Clever Hans was one of these trick ponies, and people were amazed that the trainer could point to two numbers and Hans would add them up; people in the crowd would try it, and so on and so forth. Immediately people thought: either the horse is cheating, so we're getting scammed, or the horse is brilliant, therefore there's nothing special separating horses and humans, it's all just due to how we're trained and what sorts of data they get, and I can make any animal do anything I like, right? And there was a scientific commission convened to test and disprove Clever Hans. They came away saying, well, no, I don't believe that the trainer is scamming anybody; he honestly believes that Hans is doing it. And ultimately what was learned was that Hans was in fact not able to add numbers. What Hans was doing was very carefully observing the body language of the trainer. When the trainer knew the answer to the question, say something like six plus 584, then the horse could do it. If the trainer did not know the answer, but Hans was still asked, then Hans couldn't do it. So Hans was able to produce behavior that looked very much like the thing people wanted, but it was cheating, in a way. And these modern AI systems are built around extremely clever cheating. And I don't mean that necessarily in a negative sense.
It's fascinating that you can approximate lots of behavior just by really shallow, clever cheating methods. So engineering-wise, that's fantastic, because it means that a lot of things you thought you might need to get a system working could, in fact, just be due to very clever statistical algorithms plus a bunch of data, which really wasn't known before you had the ability to look at a massive amount of data. So for engineering purposes, that's totally fine. For scientific purposes, it doesn't tell you squat. It doesn't tell you anything. I mean, imagine this: I want to try to understand the fundamental laws of gravity. So I take a big videotape and I film outside my window in Clark Hall, and I try to look at various leaves falling or something like that. And from there, I do massive amounts of statistics on that video, and then I say, okay, there's a bunch of correlations, I've now understood the general patterns, and I can predict with confidence how a leaf is going to fall. I mean, you'll get laughed right out of every physics department if you try to say that, because scientists are interested in explanation, not prediction, mostly. If a theory is predictive, that's pretty good. But you can have a totally predictive theory of something and not understand it whatsoever.
Vincent del Casino: In fact, most of them.
Jon Rawski: Right, in fact, that's how a lot of pseudoscience operates. You're very predictive, but you don't explain anything, right? And the scientific perspective is that we would like to understand something about the world. So now there's this interesting tension between these engineering approaches, which are so ubiquitous and seem to do language behavior but have absolutely nothing to do with language whatsoever. And so that's rather tricky. But again and again, what we see, and this happens in field after field, is that understanding the fundamental laws of language always helps you to understand a system that's going to use it. And again, it's worth noting that we value language so highly that we treat it as a hallmark of intelligence for a computer to even be able to closely mimic linguistic behavior. So that's the difference between studying language computationally and modeling language to some degree, right? And it's a very important difference. And I'm sure that's true not just in linguistics; every field has this. So I'm certain geography has an analog to this as well.
Vincent del Casino: 100%.
Jon Rawski: Where I might want to understand the things that go into studying geography more than I want to model something that's going on, right? And it's a really important difference. So I think the real underlying issue here is that when we look at computation and mimicking systems, we really have to be careful not to delude ourselves into thinking, because we are linguistic agents in the world, that the thing we're looking at is also a linguistic agent. It just looks like it because, for example, we're used to looking at text, and because it outputs a sequence of text, we interpret it a certain way.
Vincent del Casino: Yeah, I think we could probably pull this thread for a very long time, and it is really valuable, but I don't want to miss the opportunity to ask you where your work is going. You had an opportunity to be at the Massachusetts Institute of Technology for a semester doing some things. So where is your work right now? What are you interested in? What kind of questions are you working on? Because I think it's fascinating, and I love the fact that you're here and helping us build programs that cross-cut computer science and linguistics and other things like that.
Jon Rawski: Well, so I take firmly the view I just stated, that the computational lens, as I call it, is very informative as to what kind of thing language is. One of the deepest problems related to language, and one that comes up all the time, is that humans learn them. A child is born, and it hears some data, mostly from parents or otherwise from the linguistic community, and that data is terrible. We talk about big data systems; the child gets terrible data. It hears redundant speech, there are speech errors, there are 60 people talking at the same time, it doesn't know who's saying what, it has no idea what the linguistic constructs are, and yet with very little instruction, the child will robustly and quickly and stably come to understand a language. That's a very difficult computational problem, with a mathematical sort of impossibility at the core of it, which is that the child gets a finite amount of data, even if that data were perfect. The child gets a little bit of it, and from there they infer this very productive, infinite system that we call a grammar. So that's the third linguistic fact. The first is that language is everywhere, and it's abstract. The second is that a language is sort of epiphenomenal; it's hard to pin down what a language is. But one of the central insights of the 20th century was that the central object of linguistics is the grammar. And so the idea that a child has to infer a grammar, i.e., not a book like you're used to seeing, but whatever this structured system of mental rules is, that's a stunning computational problem. It's in fact what the core of the machine learning revolution, or the AI revolution, is predicated on: that all we need is more data and the thing will just learn the grammar. Well, that turns out not to be the case, because you can give an AI system access to orders of magnitude more data than any child will ever hear in its entire life, and it still makes stupid errors. I can go and use ChatGPT and ask it, how many R's are in the word strawberry? And it will relentlessly tell me there are two. Even when I say, no, count them, it'll go, I see, there are three R's, and then tell me, no, it's two. Really stupid errors like that. And so that tells you that when the engineering methods are sort of not working and there's a scientifically curious object, then there's room for mathematical work. So my current work is to try to understand the processes by which a child robustly infers a grammar from what we call sparse data, data that should be there but isn't. The child has a lot of inference work to do; the child is a very powerful inference machine. And so we'd like to understand how that's going, in particular such that all the generalizations we know about are achievable by what I'd call an algorithm. So I'm very interested in algorithms which will take in data and, under certain circumstances, 100% of the time, provably, and that's where the math comes in, tell me: that's the right grammar. And so we'd like to know what the relationship is between this sort of, what some people would call, innate structure, the genetic blueprint for language that seems by necessity to be there, and the learning process: how does that structure constrain learning? The fact that you are primed to learn a grammar of a specific type tells you that when I hear linguistic data, I don't just come in blind. As the 1800s biologists would have thought, I'm not a blank slate.
I have a significant amount of internal structure, and that constrains the learning problem. So the fact that humans know a lot about language already raises the question: what do they know, such that when they encounter data in the world, they can go, aha, that's a subject? How do I pick out the notion of the subject of a sentence just from hearing it? It's a tremendously difficult problem. So there has to be something there constraining the learning, such that when the child hears the data, they have some mechanism which will go and say, aha, that's German. In the 80s, there was this idea that it was sort of like one of those old telephone switchboards, with these triggers: you would hear, oh, look, I hear subjects before objects, so, okay, I'm going to flip that switch. I could have heard the reverse. And by flipping a series of switches, you would arrive at, say, English versus, I don't know, Kinyarwanda or something. Nowadays we recognize that this turns out to be a very difficult problem, because the number of switches grew extremely large. I get a piece of evidence; which of those zillion switches do I flip? It's very hard. It turned out to be so hard that it's infeasible. So now we'd like to think, okay, we can't have that; we need some other representation in there that says, okay, I have access to the data and I have the grammars, and the grammars have to be organized in such a way that testing one out rules out a bunch of other ones. For example, if I knew that I wasn't in, say, a German-speaking environment, I'd be able to wash away all the options that come with that feature, if I understood it wasn't part of my grammar selection process. So I'm very interested in that question of how the child selects a grammar from data, and also in what kinds of grammars the child is selecting when they do it. What is the internal structure of what we would, in computer science, call the hypothesis space? What kinds of guesses are children allowed to make about what the grammar looks like? The constraints on those guesses are sometimes called an inductive bias, which, maybe if you're in philosophy, is a term you know. The linguists use the term universal grammar: a general form for how a hypothesis should look. Or the computer scientists might call it a metaprogram, a way of constructing programs. So that's one thing I'm personally very interested in, as sketched below. I also do quite a lot of work on sign languages, trying to understand, if language is expressed either by speech or by the visual-manual modality, whether that has effects on the way those computations are able to be carried out. If I have a one-dimensional speech signal versus a three-dimensional visual-manual gesture, could I do things in sign that I can't do in speech? The answer is yes. And vice versa: are there things that are doable in speech that aren't doable in sign? That's certainly true. And probably the final thing I'm interested in is: what are the limits? Are there what we might call impossible languages? That is, could I imagine a language that a human could never learn? Or something that, say, an AI system could do, but that we would never expect a human to do?
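As a hedged illustration of the inductive-bias idea just discussed, and not Rawski's actual algorithm, here is a textbook-style Python sketch of learning under a restricted hypothesis space: if the target grammar is assumed to be strictly 2-local, meaning well-formedness depends only on adjacent pairs of symbols, then a learner that simply collects the bigrams attested in positive data converges on the right grammar. The sample words and function names are made up for the example.

```python
# Minimal sketch of learning with an inductive bias: assume the grammar is
# "strictly 2-local" (licensed adjacent pairs), so the learner only has to
# collect the bigrams attested in its positive data.

WORD_EDGE = "#"  # word-boundary marker

def observed_bigrams(words):
    """Collect every adjacent pair (including word edges) seen in the sample."""
    grammar = set()
    for w in words:
        padded = [WORD_EDGE] + list(w) + [WORD_EDGE]
        grammar.update(zip(padded, padded[1:]))
    return grammar

def accepts(grammar, word):
    """A word is well-formed iff all of its bigrams were licensed by the data."""
    padded = [WORD_EDGE] + list(word) + [WORD_EDGE]
    return all(pair in grammar for pair in zip(padded, padded[1:]))

if __name__ == "__main__":
    sample = ["banana", "nab", "ban"]   # hypothetical positive data
    g = observed_bigrams(sample)
    print(accepts(g, "banana"))         # True: attested material
    print(accepts(g, "nabana"))         # True: novel word, all bigrams licensed
    print(accepts(g, "bb"))             # False: 'bb' never attested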
And so that question is very tricky, because we only know a little bit about the languages that we have. There are about 7,000 of them, and it's very difficult to collect data. You can spend a lot of years, five years of a PhD, sitting with community members and doing a lot of complex experimentation to get a little bit of data. So mathematically we'd like to know: okay, it's hard to get data, but are there languages which are just totally out? Imagine a language which is only allowed to have an even number of nouns in a sentence. That's weird to think about. I can think about it, but we don't really find a language like that. So there are tons of different languages which are conceivable but just totally impossible, for reasons. We'd like to draw that boundary somewhere, if there is one. What kind of boundary is it? And if there is a boundary, is that, again, another clue to how the child is learning these languages? So there are three ideas: limitations on what could be a language, limitations on how children learn a language, and limitations on how languages, when they're expressed physically, affect the underlying system. Those are the things that I'm centrally interested in. And I think the mathematics is now at a point where we can do that. That's what's really exciting: we have mathematical insights that really weren't there in the 50s. And there's an interplay. One of the nice things I like about studying language mathematically is that you don't just apply mathematics to language, although you get a lot of insight from doing that, for sure. One physics paper said that mathematics is unreasonably effective: there's no reason it should work that well in describing so many of the natural systems we see. But it's a two-way street, and linguistic insight informs the other fields. As I mentioned earlier, linguistics is foundational in the discipline of computer science. It has also, I won't say it's foundational, but it has contributed an entire new field to mathematics, namely formal language theory and automata theory and so on. It's really advanced the study of discrete mathematics. And what I like is that I often attend conferences and workshops where a significant portion of the attendees are not linguists but mathematicians and computer scientists. The insights from just studying language then produce other insights in neighboring fields. And I like a field that's able not only to borrow, but also to contribute. So those are the two directions that I'm interested in: how can we describe language mathematically, and how can language contribute to understanding other mathematical objects?
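A small sketch, again editorial rather than from the episode, of the "even number of nouns" constraint Rawski imagines: formally it is just a two-state parity automaton, which is what makes it striking that no attested language seems to enforce modulo-2 counting over a syntactic category. The part-of-speech tags and example sentences below are hypothetical.

```python
# Toy "impossible language" constraint: a sentence is grammatical only if it
# contains an even number of NOUN-tagged words. A two-state parity automaton
# is enough to enforce it, yet nothing like it is found in natural language.

def even_noun_count(tagged_sentence):
    """Accept a sentence iff it contains an even number of NOUN-tagged words."""
    state = 0                          # 0 = even so far, 1 = odd so far
    for _word, tag in tagged_sentence:
        if tag == "NOUN":
            state = 1 - state          # flip parity on each noun
    return state == 0

if __name__ == "__main__":
    s1 = [("the", "DET"), ("cat", "NOUN"), ("saw", "VERB"),
          ("a", "DET"), ("dog", "NOUN")]
    s2 = [("the", "DET"), ("cat", "NOUN"), ("slept", "VERB")]
    print(even_noun_count(s1))   # True: two nouns
    print(even_noun_count(s2))   # False: one noun, so "ungrammatical" here
```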
Vincent del Casino: That is fantastic. Well, I really appreciate the opportunity to have the conversation. I wanna thank you for being here today and being part of the podcast. It's been really fantastic. Thank you so much.
Jon Rawski: Yeah, thanks for the opportunity, Vin.