[00:01:00] Sam: My guest today is Joe Mazzone. Joe leads the product development for Coding Rooms, an interactive training platform owned by Wiley, the multinational academic publishing company. During our discussion, we explore how the team at Wiley is addressing the challenges, risks, and opportunities presented by AI. We also talk about how the education sector and the role of teachers are likely to evolve as the technology becomes ever more present in society. Joe is a thoughtful and passionate optimist, and this was an exciting peek into the future of education. So, please enjoy this conversation with Joe Mazzone.
[00:01:36] Sam: Could you share the history and evolution of Coding Rooms, maybe touching on its origins and explaining the inspiration and mission that drives the company today?
[00:01:45] Joe: I work in the education technology space as a software product development lead specifically at John Wiley and Sons, which is a publishing company. And I work on a software product, Coding Rooms. So Coding Rooms was founded by two of my good friends, Sasha and Stan Varlamov, in 2020.
They're a father-son duo. The idea was really born out of the pandemic and Sasha having some basic teaching experience with computer programming. And so Sasha really wanted to help people teach computer science and programming online in that pandemic environment where you couldn't meet in person. And with that passion, the Coding Rooms cloud programming environment was born. Essentially what Coding Rooms offers is a space for instructors to teach programming and for students to complete programming assignments, with live collaboration, video conferencing, autograding, and lots of other things. Really giving teachers the tools they need in a fully online environment to interact with their students in a more effective way.
[00:02:47] Sam: So you mentioned Wiley and Coding Rooms. Could you maybe just explain a little about the relationship there, how Coding Rooms fits into the business at Wiley?
[00:02:58] Joe: Absolutely. So Coding Rooms was acquired by Wiley, and we operate as a smaller piece of the org. Specifically, we work on the computer programming environment, so when students go to do a programming lab or write some code in a Wiley product, we now power that environment. And so, when people need programming labs, when people need cloud virtual environments, it's our team that they come in contact with in order to get that done. An author may be looking for autograding on a specific programming assignment, or asking, "How do I give users access to this certain software that I talk about in my title?" And we're the team that makes that happen.
[00:03:45] Sam: So, the practical side of the book business that Wiley has in the background. Is that right?
[00:03:52] Joe: Yeah, exactly. The learning tools team is how you'd like to think of us. Wiley, of course, has authors, and they have a long history - 200 years of having people write books. Now in the digital age, we have these books online in some way for people to interact with, and really Wiley has been providing tools for teachers to use their content in a classroom more. And so I work with those teams on being able to provide an interactive online textbook and a way to assess students in those environments. So even when it comes to exams and other things like that, specifically in STEM kinds of environments, we help provide some of those tools for students to access the software that they need, and/or program right in their web browser, and for teachers to be able to assess that work.
[00:04:38] Sam: So what's your background, Joe? How did you get to where you are today?
[00:04:42] Joe: For 10 years I was a computer science and engineering teacher. And so I connected with Coding Rooms and Sasha and Stan as one of their first customers. So pandemic happens and I'm looking for solutions also. And I stumbled upon these guys and we become friendly. And I really believed in the mission that they had to improve education with a tool like Coding Rooms. So I started to work with them part time and then eventually I left teaching and became the full time product manager. And then obviously as we went through our startup journey we were acquired by Wiley.
[00:05:14] Sam: So what does the rest of the team look like that you've built during your time with Coding Rooms? What's the makeup of that team and how large is it?
[00:05:25] Joe: Yes, of course we were different when we were a startup. Now, mainly as an entity at Wiley, the Coding Rooms team is really a smaller engineering team, and so I lead that team when it comes to the product vision and the engineering management aspect. I do have two engineering leads. One works on the application side, the front end piece of it. The other works on the infrastructure and back end side, and they kind of lead the technical vision there with me. Then we have probably around six individual engineers that contribute to the product. Because we're a smaller team, we're really collaborative, own different pieces of the work, and talk together every day. And then what's really cool about our team is we get to work with wider Wiley a lot. On other teams you'll talk to different people, right? And they're super focused on their product, and maybe they don't get to interact with people.
But my team, we really interact with larger Wiley teams. For instance, zyBooks is another platform we're integrated in. We talk to those authors all the time and need to help meet their needs. So we collaborate with them. And same thing with the platform itself: like I mentioned, zyBooks is one of the Wiley platforms where we have the interactive STEM textbooks, and we have to integrate our piece of the product into that platform. So we get to collaborate with those engineers and that team very often.
[00:06:45] Sam: You mentioned COVID. You were in this position, you had this issue as an educator that you were trying to continue delivering high quality education to your students. There were, I'm sure, a proliferation of tools and technology in the education space during that time. How does Coding Rooms differentiate itself from competitors in the space?
[00:07:13] Joe: Yeah. And I think my answer is probably why Wiley was interested in acquiring us also. And, of course, there's a number of these programming environments online and in ways for teachers to create assignments and autograde. So there's lots of these tools online and our focus and really our differentiator is that we have a really clean, easy to use interface for students and instructors.
So for students, when they're going to complete work, it makes sense how to run code, how to do different things in our environment. And for instructors, if they're going to create their own assignments - of course we have assignments that are authored by our internal team, but instructors maybe want to create some of their own things and ideas and projects they have - we provide that simple interface that really makes sense for those people.
Yet we still offer something that's incredibly powerful. So any assignment that a student could perform on their normal computer by having them install X, Y, and Z, they can still do it in the Coding Rooms environment. And so providing that simple interface that makes sense, yet still having a powerful environment where instructors don't feel constricted is really where we stand out. Most can't do both well. We see a lot of competitors that have a simple, easy to use interface, but limited in a lot of capabilities. And we see other competitors that can do everything, but man, is it complicated.
[00:08:35] Sam: So looking more broadly, we obviously talked about COVID there, but what are the most pressing challenges that you see in the education sector, especially as they're related to the teaching of STEM, and how is Coding Rooms addressing those?
[00:08:52] Joe: Yeah. I would bucket it into three major challenges in education. Attendance is definitely number one. Students are just not coming to school across the country - there was even a recent New York Times article on it that you can look up. And you can't learn if you're not coming to school.
Another one is really the mental health concerns with today's youth with the impact social media has, increased anxiety, increased depression, we see a lot of challenges that brings into the learning environment.
And the third is really tied to what we're talking a lot about here today, and probably is my team's number one focus and that's academic integrity, or students cheating in this world of generative AI, where they can very easily go to tools on the internet and get the correct answer or have something do their work for them. That's our big concern that we hear in STEM education a lot. When I talk to instructors in a number of STEM fields that's what we hear is "how can you help me make sure students aren't cheating and that they're in earnest giving me work that they did?"
And so we have two main approaches that we're looking at with this. We know students look to cheat when they're struggling. So typically a student will cheat when they feel backed into a corner: "I don't know how to do this. I'm going to be penalized because I'm going to get a bad grade now. I need to find some other way to get this done." So how can we help them not get to the point where they're backed into the corner or they're struggling? That's one thing we're looking at.
The other thing we're looking at is how can we give instructors the data and tools that they need to identify and deter cheaters in their class. And that deter part is really important; I don't think a lot of our competitors look at it. We also think that the tools we give an instructor are something the instructor can have a conversation with their class about, so students know, "I can get caught very easily. I should think twice about cheating."
[00:10:58] Sam: Looking at that first point, how do you support those students that are struggling or falling behind? What does Coding Rooms do to target that particular issue?
[00:11:09] Joe: Our primary approach to helping students really involves this idea of instant feedback, and it became really important in my classroom also. So as I learned to be a good teacher, I realized that this idea was really important. It really comes down to, how does a student know if they're right or wrong? And knowing if you're right or wrong is really the most important thing in education. Because, how can you improve unless you know if what you're doing is the right way or the wrong way?
Most of us have had the experience in education where you write some computer code - or even just think about being in an English class: you write an essay for the class, and then weeks later, after you submit this thing, the teacher will - if they're a good teacher, it'd be only weeks later; sometimes it's months later or something like that - but you get that essay back marked up with a grade and some feedback on it. And if you're a good student, you go through and you read that and you learn from it.
During that time, whatever amount of time it took for that instructor to grade that thing, you most likely completed other assignments, and all that time was wasted without knowing that critical feedback to improve what you've worked on, how you can be better, or what you're doing wrong. And so we see this in things like math education, right? You go and you learn something in class, and you go home and do homework. What happens when you're doing that math homework wrong the entire time? You're now practicing how to solve some equation wrong, and maybe now that wrong thing has stuck in your mind. So when you go to do the test, you accidentally remember the wrong way to do it.
So we don't want students to practice doing things the wrong way. We need to provide them with better feedback. We don't want them to write another crappy thing; we want them to learn from their mistakes and continue to improve and write better things. So they need that feedback. And technology is really the way that we can help solve this problem. Technology can provide instant feedback.
[00:13:07] Sam: Yeah, that real time feedback really is key to learning, and I see this same thing managing a team, right? There's this concept of the 'One Minute Manager' I came across early in my career that really helped. Something happens, good or bad, and it's that instant feedback, short and sweet. "Hey, great job. I just saw you do this thing. Excellent work. Keep it up," or, "Hey, just noticed how you went about this particular task. Is there a different way you could have done that? How would you think about doing it next time? Okay, great. Move on." That idea of real time feedback - once I came across it, it was game changing for me as a manager, right? Waiting until a monthly, quarterly, or God forbid, annual review to give that feedback? Like you say, all that time is just wasted where people are doing things the wrong way, and you just need to stop it in the moment, or encourage it in the moment, and you're going to get better results over time.
[00:14:09] Joe: And that's exactly right. We especially see this in areas where people are new and exactly what you're talking about, I think you would say the biggest benefit is for new employees, new colleagues. The more real time feedback and the more available feedback is to them, the more successful they can be.
And so when it comes to STEM education, and especially computer science education, that's usually a new subject for learners. Many people go throughout their K through 12 education and don't experience a course like that until they get into higher education, when they go to college. And then they take a first computer science class, or a biology class, or an engineering class that digs deeper into those subjects, so they have a lot of anxiety about being wrong, because it's something new. It's not like math, or English, or social studies that you've been doing your whole life; this is something truly new, and that feedback can help you gain confidence much quicker because you get a really quick response back: "I'm doing things right, okay, let me keep moving in this direction. Oh, that wasn't right, let me course correct." Versus when you're down an entire path of doing things wrong, that's a lot of anxiety on you: "man, I've been doing this wrong the whole time, what am I supposed to do, how can I turn back?"
What we see is students are in a classroom, they're at home, they're working on something, they are working on a lab activity in the classroom, and I experienced this in the classroom. Students will raise their hand, you come over to them, you visit their desk, say, "what do you need?" And they simply will just look and say, "is this right? Did I do this right?" And you get that from, you maybe think back to younger experiences, you're just doing something on a worksheet, you may ask the teacher, "did I do this right?" Come up to the desk. Or more advanced things like, I'm teaching computer programming. A student has a whole program written and they just ask, "is this right?"
And so I recognized that there had to be another way to do this, and that software is really the way to help with that automated feedback. And that's what we do at Coding Rooms. Coding Rooms allows students to just click a button and submit their work. We grab that, we assess it, and we give them targeted feedback on the items that we're looking to grade - all the learning objectives: "did you create this variable that holds this, does your program's logic do X, Y, Z," whatever it may be. We evaluate them on that and we tell them if they've done it correctly or not.
And of course there's some feedback there too, with error messages and any other information we can provide to the student. And a tool like that really helps them continue to do their work and demonstrate mastery. Instead of a student submitting something wrong and getting a bad grade because they just had no idea it was wrong, students can now say, "Oh, this is wrong. Let me improve it. Oh yeah, let me review this material in order to get better at it." And now we can actually get an artifact, a submission, that allows the students to demonstrate mastery, instead of not knowing whether what they're submitting is right or wrong.
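[Editor's note] The submit-assess-feedback loop Joe describes could be sketched roughly like this. Everything here - the function names, the objective format, the feedback shape - is hypothetical illustration, not Coding Rooms' actual implementation:

```python
# Hypothetical sketch of objective-based autograding; not Coding Rooms' real code.

def grade_submission(source_code, objectives):
    """Check a student's submission against a list of learning objectives.

    Each objective is a (description, check_function) pair; the check
    inspects the submitted source and returns True if it is satisfied.
    """
    feedback = []
    for description, check in objectives:
        passed = check(source_code)
        feedback.append({
            "objective": description,
            "passed": passed,
            # Targeted feedback only where the student fell short.
            "hint": None if passed else f"Not met yet: {description}",
        })
    score = sum(f["passed"] for f in feedback) / len(feedback)
    return score, feedback

# Toy objectives for a toy assignment (real graders would run the code,
# not just inspect its text).
objectives = [
    ("creates a variable named total", lambda src: "total" in src),
    ("uses a for loop", lambda src: "for " in src),
]

score, feedback = grade_submission(
    "total = 0\nfor x in range(3):\n    total += x\n", objectives
)
```

The key point is that the student gets the per-objective result instantly on submit, rather than days or weeks later.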
[00:17:15] Sam: So going back to that other side of what you do and what the product does, you mentioned how cheating is a big issue especially with the rise of generative AI. How do you approach that issue?
[00:17:31] Joe: Our approach is really interesting when it comes to academic integrity and preventing cheating. Do you know about CAPTCHA? That thing that asks "are you a robot?" when you go to a website or something like that?
[00:17:43] Sam: Yeah. Like all the zebra crossings and "are there cars in this picture?"
[00:17:49] Joe: Yeah, are there cars there? What's going on? There's a lot of people, like my grandma or someone, who looked at it and said, "why is this thing asking me if I'm a robot? How does it know I'm not a robot?" People wonder that. And really what it's doing - and this is the approach that we're taking with academic dishonesty - is just making sure you're acting like a normal human. Can you complete this task in a way that a normal human would? And so it's seeing that your mouse is moving and you're naturally clicking things in a certain way that isn't a programmatic pattern. Programs will do things very fast, very logically, in a certain order. Humans will do things in some random, human order, almost.
And so we look at this kind of plagiarism, academic integrity issue in the same way. Our coding environment tracks everything the student does: all the keystrokes that they type in, how they type it in. Now we know the speed that they're typing things in. We know when they paste something versus type something in, and when we look at the entire class's data, we can normalize what is normal behavior for a specific assignment.
And of course, every assignment's different, right? So we have to do this analysis on a specific assignment. And every class might be different. And this is not making fun of a certain institution or anything like that, but a class of students at MIT will probably behave differently than a class of students at my alma mater, Rhode Island College. We're probably a little bit different as students, and so we have to take each class a little bit differently.
And so we normalize across how students behaved, and we look for the outliers. And these outliers are most likely the students that are either a) struggling or b) cheating. So one metric we look at, for instance, is time spent. Someone struggling is most likely going to spend a lot more time working on something than somebody that's not. That doesn't necessarily mean that they're struggling - they could just be somebody that takes more time and knows what they're doing - but it usually is a pretty good indicator. Someone cheating most likely won't need to spend as much time, right? They'll just go in, have generative AI develop their solution, paste that in, and that's it. They get everything correct. And that's just one metric that we look at. And of course, we have tools for this. So we detect these outliers, and we have tools for the instructors to investigate. And that's really important.
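[Editor's note] The normalize-then-flag-outliers idea Joe describes for a single metric (time spent) can be illustrated with a simple z-score sketch. This is an assumed, toy version: a real system would combine many signals (paste events, typing cadence, submission history), and the threshold here is arbitrary:

```python
# Hypothetical sketch: flag time-on-task outliers for one assignment.
from statistics import mean, stdev

def flag_time_outliers(minutes_by_student, threshold=2.0):
    """Normalize time spent across the class and return students whose
    z-score exceeds the threshold in either direction: unusually slow
    (possibly struggling) or unusually fast (possibly cheating)."""
    times = list(minutes_by_student.values())
    mu, sigma = mean(times), stdev(times)
    outliers = {}
    for student, t in minutes_by_student.items():
        z = (t - mu) / sigma
        if z >= threshold:
            outliers[student] = "unusually slow - possibly struggling"
        elif z <= -threshold:
            outliers[student] = "unusually fast - possibly cheating"
    return outliers
```

Because the baseline is computed per assignment and per class, the same student body (MIT vs. Rhode Island College, in Joe's example) sets its own "normal."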
Some of our competitors were advertising things like, "Hey, send your students' work through this AI detector. We're going to tell you if they cheated with AI." We don't have that approach at all. Instead, we say, "let's observe everything the student does, let's find these outliers, and we're going to let you investigate." So we have tools like: compare certain students, these outliers, to the rest of the class's code and see if there's similarity. Compare it to some solutions that we find online and see if there's similarity with the student. Run it through generative AI and see what generative AI produces - is that solution similar to the student's solution? So you can compare what the student submitted to other things out there, like their classmates' work and stuff online.
And with that history, we have a tool that lets the instructor play back what the student did. So the instructor can just watch the student paste things in, type things in, run code, submit code. We let the instructor observe that entire behavior and we let the instructor make that decision. And then, really, what we found is that with these tools, we encourage instructors: show your students these tools. First day of class, say, "look what I can do. After you guys work here, I can actually play back everything that you did. Every time you pasted something in, typed something in, every time you ran your code, I can see those moments in time. Isn't that cool, that I can see everything?" And knowing that deters students.
It's very similar to why they put up signs - no speeding zone, or speed camera ahead, or those displays that won't give you a ticket but show the speed you're going in a certain neighborhood. It's all things just to keep in the back of your head: "oh wait, someone might be watching, I shouldn't do these things." It's the same thing here with cheating.
[00:21:53] Sam: That's how you're addressing the potential misuse of AI by the users of the platform. Could you share any AI driven features that Coding Rooms currently has in place or is developing within the software to improve the experience for those users?
[00:22:13] Joe: That's the most exciting stuff for me - the things that we're developing baked into the product that use AI. We mainly have two things that we're looking at with AI helping. One is generating better feedback to students. We talked about this instant feedback; it's important. We know the quality of that feedback is even more important than just giving it. How can AI help us there?
And the other thing is helping instructors identify these struggling and cheating students with insights. So, can we have AI analyze this information more to provide and boil up insights for instructors? Really, just plain text of, "this student may be doing this, go check it out", so instructors have a better handle on that.
When we first learned about generative AI, at the beginning of '23, really everyone was kind of jumping on it. My vision of the future was this idea that everyone would have a personal teacher or tutor with them all the time. Similar to how computers and cell phones have allowed us to have a calculator with us all the time, or the internet allows us to have an encyclopedia of information with us all the time, generative AI is going to allow us to have an expert with us all the time. Someone we can ask questions of and lean on, who replies back to us very close to how a human would.
So the first idea we came up with was a learning tutor. We had this experiment: can we have a tutor in the book with you while you're learning, that you can lean on at any point in time? Because that's one of the challenges when you're outside of the classroom, right? There's learning that happens outside the classroom. Your teacher's not there. You can't put your teacher in your pocket and bring them home and ask them questions. You only have that opportunity in the classroom. But can AI help with that?
So Khan Academy has something very similar - you can look at some other products out there that are doing this AI tutor stuff. Khan Academy's is really cool; it's called Khanmigo. They have some videos that really express my vision, what they're looking at. But this AI tutor would know everything on the internet, right? It knows what generative AI knows when you go to ChatGPT and things like that. And then what's cool from our side, with the digital textbook aspect, is that its primary knowledge - even though it knows the internet - is what's in the textbook: what are the things the student's trying to learn?
" I'm AI, I know the internet, I know lots of things. Oh, and I need to help a student with the stuff that's in this textbook", and it absorbs that content of what's in the textbook and so it knows the entire textbook's content and now it can help the student with those learning objectives in the textbook. The student can ask things like maybe read a paragraph in the book and it says, "what the heck did that paragraph mean?" I can ask the AI tutor, "Hey, can you explain this in another way for me? Can you rephrase this paragraph?" Or you read a section in the book or an entire chapter, whatever it may be. And you ask the tutor, "can you quiz me on the material that I just read?" And it can do those kinds of things.
And we're still experimenting with this kind of technology. We haven't released anything to public users at this time, but I really think this is going to be some cool thing that people can have in the future - this kind of personal digital assistant that's with you, learning with you, that you can lean on because it knows what you're doing and everything that you may want to know.
Now, we haven't released anything on this kind of full tutor that knows the book and helps you through your learning, but one element we did release, in a closed beta that's not available to everyone, is AI feedback on programming labs.
And so I talked a little bit about how we have instant feedback, and it's an important concept. I'm working specifically on programming environments where students are writing code and doing assignments. So that does some automatic grading for the instructor and provides feedback to the students. It's really powerful, transformative, but we hear from students sometimes that the feedback isn't always the best: "I don't know what the feedback meant for me to do."
So if you're a student that's still struggling with that aspect, we wondered if AI could re-explain it, just like a teacher would, because that's what the student does, right? Right now, a large majority of students get automated feedback from the system, and with that feedback they're able to move forward and do better on their assignment. But maybe there's still a portion that doesn't get it. What do they do? Next time they see their teacher: "I ran this, the autograding gave me this as feedback, but I still didn't know what to do with my code."
So we have a system now where, when students get feedback, they're prompted: "Hey, do you need more feedback? Do you need more information about what you did wrong, or how you can make this better?" And they can ask the AI and get some hints from it. The AI knows what the student wrote, it knows what the challenge of the assignment was, and it knows the teacher's solution - even though the student can't see the solution, the AI knows it - and with all that information it can nudge the student in the right direction.
[00:27:08] Sam: That idea of a personal tutor tailored to the individual, it is really compelling. It goes back to that point of the feedback, right? It's just incredible how quickly someone would be able to develop in comparison to where we are today. But, in terms of actually getting that AI into the platform, integrating it and achieving that goal, what challenges have you faced in terms of that integration and how are you overcoming them?
[00:27:40] Joe: There are certainly challenges, and I don't think anybody adding AI to a software product, or even just using it in their day-to-day work, avoids some challenges. Our big one was that we set out to make sure students aren't cheating, and using generative AI to cheat. So when we add generative AI to the product, we've got to make sure we're not making the problem worse.
For example, we want to make sure that the AI just doesn't give the students the answer. And so for our AI generated feedback, we had to carefully tell the AI, "your purpose is to never give the student the answer. You're going to help, but you can't allow them to cheat. You can't just give them the answer." And this is really hard to do. So we had to really carefully craft the prompt that we give the AI. We essentially give the AI certain prompts to tell it its role. And we explicitly say, "you are a teacher. This is what a teacher's role is in a classroom, and you're going to answer questions of students. When they get things wrong when it comes to our feedback, you're going to look at the feedback, you're going to look at the solution, you're going to look at the assignment's prompt or question, and you need to give the student help without giving them the solution." And so we drill the AI to know that rule number one is you never give the student the solution. You just nudge them in the right direction. You say, "Hey, check out this line in your code - you're hitting this common issue," but it doesn't say "just change line two to this and it will work."
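[Editor's note] The guardrail Joe describes - a teacher role plus a "never reveal the solution" rule, with the student's code, the assignment, and the reference solution as context - could be assembled as chat-style messages like this. The wording and message format here are illustrative assumptions, not Wiley's actual prompts:

```python
# Hypothetical sketch of the guardrail prompt; wording is illustrative only.

def build_feedback_messages(assignment_prompt, solution, student_code,
                            autograder_feedback):
    """Assemble chat messages: a system message pinning the model to a
    teacher role that forbids revealing the solution, plus the
    per-student context as a user message."""
    system = (
        "You are a teacher helping a student with a programming assignment. "
        "Rule number one: never give the student the solution or corrected "
        "code. Nudge them toward the issue instead, e.g. point at a line "
        "or name a common mistake."
    )
    user = (
        f"Assignment: {assignment_prompt}\n"
        f"Reference solution (never reveal): {solution}\n"
        f"Student code: {student_code}\n"
        f"Autograder feedback the student did not understand: "
        f"{autograder_feedback}\n"
        "Explain the feedback another way without giving the answer."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

As Joe notes later, a system message alone is not a complete defense; it has to be combined with limits on what users can feed the model.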
The other thing, really, to be honest with you, when you talk about challenges to getting this in the platform, was that we had to upskill employees. And I think this is everyone's challenge. We needed our engineers to work with these AI systems, and all this technology is new, really. So we had some people that had some experience, some that had zero experience, and we needed to train them. We did have a leader on the team: my friend Sasha, who I mentioned was one of the co-founders of Coding Rooms. He really dug into generative AI immediately when ChatGPT was announced; he was the one saying, "this is great, we're going to dig into this."
And he was spending long nights on his own time, just writing different little projects and doing all kinds of things. We, as a larger team at Wiley, asked Sasha, "Hey, you're really being a leader here with this. Can you develop some kind of learning materials and work with the engineers and train them on how to do this, so that we can have a larger team rather than just leaning on you to develop all this stuff?" And so he did that, and it was really great learning for the team. And essentially, any of the team members that wanted to step into this were the people that we were going to lean on to help with these other experiments and product features.
[00:30:26] Sam: You mentioned that in your first point around the prompting that you have to give to the system and you don't want to make the problem of academic integrity worse. How are you addressing the issue of prompt hacking, of people trying to game the system?
[00:30:45] Joe: Prompt hacking is a real issue for these AI embedded systems in these software products. For those that don't know, what we're talking about with prompt hacking is someone manipulating the AI system by giving it certain commands or information through the prompt.
So if you've used ChatGPT, you know that you go in and write something to it - you ask it to do something, or ask a question - and from that prompt it generates a response. Its response to you is very driven by that prompt. So sure, it's funny when you go on YouTube and type in some prompt hacking stuff, and you'll see people doing funny things with ChatGPT and getting the system to do things that it's not supposed to do, or give information it's not supposed to give you, or fighting with it to get it to do certain things.
But what happens when your main goal, like my main goal, is to have a product that users pay for, with an AI system in it that users are really expecting to behave a certain way, or that we're advertising is going to behave a certain way? We don't want someone to do some prompt hacking; that's a bad thing. That's not a good look for us.
So really, we did a number of experiments to figure out the best ways to prevent this, and it's really hard, because any new information you feed the AI can start to make it hallucinate and go off on a new path. So even if you set it out with certain goals and objectives, if you continue to poke at it enough, you can change its goals and objectives. We address this by limiting the ways that students can interact with the AI. And so for everything that we have out there in the wild right now, we limit those interactions. That's one of the reasons why we don't have the full tutor out just yet, because we don't know the best ways to limit those interactions yet.
So for the AI feedback on the assignments, instead of providing students with a text box where they can type whatever they want and ask the AI anything, we just provided two buttons. Feedback is generated by our system as normal, as we've been doing for years. But then an AI help prompt comes up and says, "Would you like us to explain this a different way? Would you like the AI to provide you with more details?" And we have a couple of other prompts that may come up, but it's very specific prompts that students can click on, and we feed those to the AI. And then the AI, based off of those, will generate some new information for the student.
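[Editor's note] The interaction-limiting approach Joe describes can be sketched in a few lines. This is a minimal illustration, not the actual Coding Rooms implementation; the function and option names are hypothetical. The key idea is that student button clicks map to a fixed allowlist of prompt templates, so no free-form text ever reaches the model:

```python
# Students can only trigger pre-approved prompts -- there is no free-text
# box, which closes off the obvious prompt-injection route.
ALLOWED_PROMPTS = {
    "explain_differently": "Re-explain this feedback in a different way for the student: {feedback}",
    "more_details": "Give the student more detail about this feedback: {feedback}",
}

def build_ai_request(option: str, feedback: str) -> str:
    """Map a button click to a pre-approved prompt; reject anything else."""
    if option not in ALLOWED_PROMPTS:
        raise ValueError(f"unsupported option: {option!r}")
    # Only system-generated feedback text is interpolated into the template.
    return ALLOWED_PROMPTS[option].format(feedback=feedback)
```

Because the student never supplies the prompt itself, attempts like "ignore previous instructions" have nowhere to land; the trade-off, as Joe notes, is that repeated clicks eventually yield repetitive responses.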
Does this always work perfectly? No. If the student just keeps clicking these things, they'll end up getting some of the same responses from the AI, because there's only so many ways you can rephrase a sentence or re-explain information. But we have found so far, with the research we've done, that students have said, "Oh, that was pretty helpful. There was something that I didn't get, and by having the AI re-explain it a couple of times, I was finally able to understand where I was going wrong."
[00:33:28] Sam: You mentioned there the possibility that the AI could hallucinate and go off track, and it has this whole world of data and knowledge that it could potentially tap into that's completely unrelated to the subject at hand. How do you balance the benefits and drawbacks of using general purpose AI versus AI that's trained on a limited knowledge base?
[00:33:59] Joe: This is going to be a really important area of research when it comes to leveraging AI technology. In my eyes, all the research right now, and all the stuff that we're trying to use across all the products you're using AI in, points to everyone using a general purpose AI. It essentially has the knowledge of the internet, and we've seen that having all this general knowledge makes it better at most tasks. You can give it lots of different tasks, and it's pretty good at those tasks. And this is because the more context and knowledge you have about different things, the more ways you can apply all kinds of knowledge to new situations. Just like you think of with normal human knowledge, right? Someone that knows lots of different things can apply all those different pieces of knowledge to different situations.
However, I think there's a balance. And so what we've seen at Wiley and what we're really excited about with Coding Rooms and Wiley's products is if we can give this AI very specific content, like all the information in the textbook, and we tell it "what we want you to remember is that you primarily know and care about this content that's in this book. That is going to be your general knowledge."
It might be able to solve more targeted problems better. For instance, in our case of a tutor providing feedback, it needs to know that its context is mainly around a certain knowledge base, a certain amount of information. This is definitely not my job, as more of a general software engineer for the platform, leading the software engineers, but for the people working on the AI systems, I'm really curious to see how they're going to help us solve those problems: how they're going to help us feed the AI some more specific information while keeping its base knowledge general about all things.
[00:35:54] Sam: I'm really interested to understand a little bit more about the use of AI internally, right? So looking at those internal operations, are there any tasks or processes where you've used the technology to help advance the goals of the business?
[00:36:09] Joe: Absolutely. We've deployed AI even more with internal processes, to be honest, versus the user facing products that our customers interact with on a daily basis. So internally we're doing even more experiments.
Obviously one of the biggest tasks of a publishing company like Wiley is creating content. But publishing content involves lots and lots of people. When I first got involved with the company, not coming from a publishing background, I was surprised at how many people it really takes to do editing and reviewing, especially when you have digital content and have to worry about accessibility compliance and the other technical aspects of making that content available online. These review processes are incredibly manual. Throughout history, probably the 200 years that Wiley's been around, you've had people manually reviewing these things. And I really think AI is helping, and we've seen that. I think everyone would say, wow, AI can definitely help us in this area.
And so we have AI reviewing our material, looking for accessibility issues, for example. Obviously I'm going to focus on the digital products more, because that's where I work. Let's say there are images in the books. Of course, we need to have alt text. Alt text is how a screen reader could describe an image to someone who is visually impaired. That's one way alt text is used. And instead of just finding the images that need it, what's cool about generative AI is it can generate a suggestion. It can analyze the image for us and say, this is how I would describe this image. And then an author can just go in and tweak it. So instead of the author spending hours looking at the image, writing a description for it, having someone else review it, getting feedback, going back and forth, we have generative AI doing that first step. Now the author just reviews what the generative AI has produced and makes their edits, additions, subtractions, whatever it may be.
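[Editor's note] The draft-then-review workflow Joe describes follows a common pattern: the model produces a first pass, and a human approves or edits it before anything is published. A minimal sketch, with hypothetical names and the model call stubbed out as a plain function argument:

```python
from dataclasses import dataclass

@dataclass
class AltTextSuggestion:
    image_id: str
    draft: str              # first-pass description from the generative model
    final: str = ""         # author-approved text
    approved: bool = False

def draft_alt_text(image_id, describe_image):
    # describe_image stands in for a real vision-model call; the AI only
    # ever produces a suggestion, never the published text.
    return AltTextSuggestion(image_id=image_id, draft=describe_image(image_id))

def author_review(suggestion, edited_text=None):
    # The author tweaks the AI draft or accepts it as-is; nothing ships
    # without this human sign-off step.
    suggestion.final = edited_text if edited_text is not None else suggestion.draft
    suggestion.approved = True
    return suggestion
```

The design choice here is that the AI output and the approved output are separate fields, so the human step can never be silently skipped.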
We've seen this also with pure new content generation. We hire, Wiley specifically, hires so many people, hundreds of people, to generate exam questions every year. And then, out of those hundred people that generate certain exam questions, we have another hundred, twenty, fifty, whatever it may be, people that end up just reviewing those and pick the best. Imagine if AI could just do all that generation. They could generate all those exam questions just by taking in Wiley's content. "Here's Wiley content on cyber security. Generate different exam questions for us", and then authors, content people, focus on just review and refinement.
Specifically when it comes to software engineering, too, my team has started to use GitHub Copilot. If you've seen, Microsoft calls all their AI products Copilot, and GitHub Copilot is specifically for generating code with AI. It knows all of GitHub, and it helps you write code by knowing what other people have written for solving different problems. And so far my team really loves it. Does it do all the work for them? Are my software engineers not doing anything, with the AI just generating everything? No, not at all. What they've talked about is how it's making them more efficient.
That's really what we're talking about across all these teams is AI can make you more efficient. Instead of my team going out online when they're starting something new and finding some boilerplate code, we call it, some basic stuff, pasting it in and then working off of it, they just ask AI to generate that stuff for them now. "I need to do X, Y, Z. Can you generate some code for me that gets me started doing that?" And that's what it's really good at, it generates that boilerplate and from there they can start to do the creative aspect of the code, the elegant solution that we're going to have as part of our system. It also helps them summarize their code, document their code, submit code for review with some great descriptions, so it's really helped their efficiency.
[00:40:05] Sam: Internally, what have been the principal concerns regarding the use of AI? What are the risks?
[00:40:12] Joe: The biggest risk, what Wiley and, I'm sure, every business was worried about, is putting your intellectual property into this thing. This thing's taking data, right? We have to feed it information, and so it's going to steal our intellectual property. Wiley has 200 years of history of really great content that people pay for, and we're just going to give it to this thing, and then it's going to be out there for the world, and whatever companies are going to have it, and now their AI is going to be smarter because it has Wiley's stuff. It's a serious concern.
Same thing with the code, right? This is why we were so slow to adopt GitHub Copilot. You might be saying, "Oh, Joe loves AI. He's talking about that. Why is his team only just starting with Copilot recently?" And it was because we had to take it very seriously before we exposed our entire code base, which is closed source, right?
Nobody gets to see our code other than people that work at the company. We had to be careful about what we were giving this AI and what it had access to. And so there's still concerns, of course, and we need to be worrying about it. We need to be concerned about it. We make sure that we're using the AI technology in all the right ways.
And that's Wiley's concern as a larger corporation. I'm sure it's everybody's concern.
[00:41:23] Sam: You say you're making sure you're using it in the right way. Are there any internal policies or budgets that govern the use of AI at Wiley?
[00:41:34] Joe: Yeah, absolutely. And that was one of the things we had to wait for. We had to wait for Wiley at large to really develop policies on acceptable use of AI. And the main thing, really- and I encourage every business to do this and to talk to their employees about it- is that internal use of AI needs to go through AI tools the company has authenticated and purchased.
And the main thing is, these free tools are stealing your data. When you use a free tool, it's taking that data, and you don't know what they're doing with it or how they're going to use it in the future. So imagine your company's intellectual property, and you're just feeding it into this free tool. That's pretty scary for the company. So the reason my team, again, is only starting to use GitHub Copilot now is that we had to wait until we signed a full enterprise agreement with Microsoft.
Many of these tools have enterprise agreements where they're going to keep your data separate. The data that you feed into it when someone signs in through your company's account is kept separately. There are data privacy agreements between you and that company, and now it makes you feel much safer as a company that your data is being handled properly through that tool. And so we have that with Microsoft, we have the enterprise agreements with OpenAI, with the other tools that we're using, and all of that is super important to have in place before you just start going all in with AI at work.
[00:42:58] Sam: So that's the hard controls that are in place. What about the softer side? How do you cultivate an internal culture of learning around AI? It's evolving so quickly and things are changing, so how do you encourage that within the business?
[00:43:16] Joe: Yeah, that's really my favorite part. One of the really great things, I thought, is that in the beginning of 2023, when this was starting to blow up more in the news, we created an AI channel on Slack across the org. So this isn't just my small Coding Rooms team; this is a channel we share with a larger piece of the org. We post different articles there, we share information, we say, "Hey, this is a little project I was building with AI, check out this video." We really share with each other, comment on each other's posts, and it creates that culture of learning and sharing. And I think that culture of sharing leads to a culture of others wanting to learn. And with all the news that comes out, it's hard for one person to really keep up with everything, so it's great to crowdsource that information.
[00:44:04] Sam: Yeah, for sure, it's like a fire hose of information right now, so anywhere you can get curated content, especially from people that are trying to solve similar problems, I guess it's a huge benefit. Looking ahead, how do you see the rise of AI impacting the future of work?
[00:44:22] Joe: I am so excited for how AI is going to impact our lives and the future of work. It's going to continue to allow us to focus on what humans do best. That's really what I like to think about. We're good at problem solving, deep thinking. We're good at the emotional and the personal aspects of work. AI is going to continue to make us more effective at our jobs so we can focus on those things. When it comes to our goals, we can hopefully achieve them more effectively, because AI allows us to focus on what matters about being human.
[00:44:55] Sam: What do you say to people that may be concerned about how this technology is going to replace us? Specifically, let's look at the Coding Rooms platform. What impact is that going to have on the traditional teaching model? Is this ultimately going to replace teachers?
[00:45:14] Joe: That's obviously the biggest concern. And we've had that even for years now with the instant feedback. Some teachers will say to me, "if the system provides feedback to my students, what's my role?" and I talk to them about how there's so much more that they can do as a teacher. And there's so much more color that they can add to the classroom when they don't have to spend time on addressing every individual student, every time they have a question.
So right now, across the country, we see that teachers are burnt out. We have more people leaving the teaching profession than entering it. And why is that? Because to be a great teacher, you need to spend all day teaching, then you go home, maybe you have dinner, and then you've got to spend all night grading, answering emails, helping students outside of class. Pretty much, Monday through Friday and a lot of the weekend, teachers are working all day long, all night long. And then the summers. Yeah, we like to say, "Oh, teachers get summers off, they have these extra vacations." Guess what? The only time they have to write the curriculum and plan what they're going to be doing in the classroom is during those vacation weeks and the summertime. So they're busy all year, and teachers need to be more efficient. We need to help teachers in any way that we can, to give them as much time as possible. I strongly believe that AI is going to help teachers.
And yeah, the teaching model will change, just like how the internet changed the teaching model, right? We now have more asynchronous learning opportunities. A hundred years ago, you probably only learned if you could read it in a book or when you were standing in front of your teacher lecturing. That was it. Now we have so many opportunities for asynchronous learning. Teachers can assign YouTube videos for you to watch, other learning activities for you to do outside of the classroom. There's so much you can do, and then of course with the Coding Rooms system and lots of the products we have at Wiley, you can get that instant feedback without your teacher standing there.
What's the primary role of the teacher? Are they supposed to be the gatekeeper of knowledge, like it was a hundred years ago? I'd like to think not, and I'd like to think that AI is going to help those teachers with all the things that they need help with. And what's the teacher going to be? They're going to be that human connection in the classroom. They're going to be the spark that inspires students still. They're going to be a connection that a student needs when they're not getting it from the other systems. That human connection where the teacher can look in the student's eyes and understand how they need to know it. We will still always need that.
[00:47:50] Sam: What about those more junior employees, people who are just entering the workforce and maybe don't have skills and experience yet? The work that AI is going to replace could be an excellent ground for them to develop those skills. So what does the rise of AI mean for those junior employees, with the potential loss of entry level or low skilled work?
[00:48:19] Joe: This is what I would say is probably your primary focus on this podcast: AI is not a threat here. That's what everyone's concern is, right? Everything that everyone's worried about is, "Oh my God, it's gonna take my job." I love studying technology throughout history, and people have been saying that forever, from a man first picking up a stick through the industrial revolution. "Oh my God, technology is stealing our jobs." Yet here we are, still a society of humans.
So skills and what's considered entry level has changed throughout all of human history and AI is going to be another big change for that. So I think we'll have a new fleet of junior employees that will have some different skills and they'll be able to help with different tasks. I know, as I think everyone that's listening can probably agree, you'll have junior employees, new employees, interns, that array of professionals. And sometimes you have a lot going on and you go, "I wish I could just assign this to those people, but they don't have the skills yet." Well, maybe those tasks can now be assigned to those people because they'll have some tools.
Many fear AI replacing these basic skills and that it'll be the demise of mankind. But, again, just lean on history. For example, modern carpenters use all power tools. When you see someone working on your neighbor's roof, do you see those workers with a hammer and a nail? No, you don't. You see them with pneumatic nail guns, just nailing those things in, and they get that roof done fast. Now, in a day, they strip the roof and they have the new shingles on in no time. Same with power saws, right? You don't see anyone hand sawing wood all the time. They just come with a power saw, get it done, it's cut.
AI is going to make you faster at doing your job, just like power tools make you faster at cutting a piece of wood. And I'm not dismissing the skills that AI is, quote unquote, going to replace. Just like with great carpenters who know how to use hand tools, those true craftsmen who can get detailed work done because they know how to use every hand tool and do things the old school way, we're going to have lots of people that are interested in learning those fundamentals. And because they know those fundamentals, they're going to be able to do some things, and they're going to stand out in some ways versus other employees. But that doesn't mean that everyone needs to know how to work without AI in the future, right? Probably the majority of people will just use AI to accomplish some of the more basic things and get them started, just like how the majority of carpenters use power tools.
[00:50:50] Sam: So what are those skills then that become critical as AI becomes more prevalent in our day to day workplaces?
[00:50:59] Joe: So obviously, number one is you've got to learn how to use this generative AI stuff. So everybody that's listening, that's been avoiding playing around with ChatGPT, you've got to start playing around with it. And it all boils down to a couple of things.
First, technology literacy. Those that know how to use the technology will be more efficient with it at getting their job done. And those that are less efficient are probably the ones that don't know how to use those tools. This is nothing new. We saw the same thing when people didn't want to learn how to use the internet, or didn't want to learn how to use a word processor. Over time that goes away, but it's a struggle. And ultimately, what is technology literacy? Really just learning how to use a tool, and that boils down to being a lifelong learner. Above all, what I look for in new employees, in everyone, is somebody that's willing to learn, somebody that wants to continue to learn, that's hungry to learn. So are you willing to grow and learn? If you are, you're going to be interested in learning about new technology, AI, and how it can make you better at your job.
Why is this more prevalent now? You know what I mean? Why are we even more concerned about it than in the past, if we look at history? Well, it's because technology now changes every year or every couple of years. We've seen generative AI change completely and become so much better in just a year's span. This is versus where we would see technology change every hundred years, 50 years, 20 years, whatever it may be. You could sit there for 20 or 30 years in a job and not have to change anything about what you do. Today, it's not really that way.
[00:52:31] Sam: Yeah, that rate of change is only accelerating. Look how quickly we moved from an agrarian economy to an industrial economy to- if you look at countries like the UK- now a service economy. Each change has accelerated. Our great grandparents could be working in a field when a new piece of technology came along, but they didn't instantly lose their jobs, they didn't instantly have to re-skill massively, they weren't all moving to the big city right away; it happened over a period of many decades, and certainly over a career for many of them. So the idea that things would change that quickly just didn't exist. And now, almost every other day there's an announcement of some new layer, some new tool, something else that's going to change the way we work. Perhaps some of it's scaremongering, but the general trend is certainly in that direction.
Just to round things out, Joe, looking ahead, can you share your vision for the long term impact of AI on education and how Coding Rooms aims to be a part of that transformation?
[00:53:41] Joe: AI is changing education. Right now, it's changing education. We have seen, with the surveys that have been done with students in higher education, colleges, universities, that the large majority of them- over 75 percent of students- have been using AI to get their school work done. So my goal, Coding Rooms' goal, Wiley's goal, is to make sure that as AI changes education, it changes it for the better. My hope is that the long term impact will be that people really get to focus on being people, that we can continue to connect with each other, that we have more time to connect with each other because we're more effective at all these other things in our lives, and that AI just helps us be more efficient. This includes learning. Learning needs to be more efficient. We're going to make sure that students and teachers have the tools that they need to be their best selves.
[00:54:40] Sam: Incredible. Looking forward to seeing you do it. Joe, thank you so much for coming on. I really appreciate your time.
[00:54:47] Joe: Thank you.