Alright, we've got one more game-oriented lecture coming up, and it's given by Andrew Wilmot of Maxis about The Sims 2, which should provide a pretty interesting contrast to the previous games, which were all shooters about shooting things and stuff. So go for it. Thanks so much. This was quite a different game, and it's also by far the most content-heavy game I've ever worked on, so what I'm going to do is mainly give kind of a breadth-first overview of the entire process, and just try and hit as many different parts of the puzzle as possible rather than drilling down deep on anything too much in particular. So in questions at the end, I'll be happy to go into detail on some stuff, but the main theme of this talk is just to try and give an overview of how you put together what was really a pretty massive game. So yeah, why is Sims 2 interesting? A lot of animation. Like some of the other talks here, a lot of sound, surprisingly. Massive amounts of people. We had at least probably 250, maybe 300 people touch this project. That's not all at once, but still. It's interesting from the point of view that the users are our level designers. This is completely different from the normal FPS process where pretty much the whole point of your content is to produce levels. So our users are designing the levels, we're just giving them the content. And our simulator is kind of interesting in that the whole thing is driven by a visual scripting language. So the Sims 2 was a long project. We started in late 2000. For a long time it was just a pretty small research team basically doing constant demos and constantly losing people or gaining people from the rest of the studio due to other projects going on. We were in full production for about two or three years. A lot of people at the end, as I said. And we had a pretty big slip. We were trying very hard to hit a particular deadline and we missed it. So there's also some interesting stuff there.
The downside of that for us was that we had an extended crunch period. These are... I'm gonna give more stats as we go along through each particular section, but these are like the main ones. For me, the 11,000 shipped animations is the really big one. A lot of, you know, the other kinds of data that go into a game, but the 1.1 million lines of code quoted there is no blank lines and no comments. I'm just gonna quickly flip through a couple of diagrams which you're not expected to memorize, just roughly showing what the process was on Sims 2 so that as I go through the sections a bit later, you have something to hang your thoughts on. This is basically how our art and scripting, object scripter process kind of went through the pipeline and produced mostly either scripts or 3D or 2D resources. And on the other side, we had code picking up these resources. And our architecture was basically that we had kind of a platform, a game-neutral rendering engine, and then a diverse bunch of components basically making up the application layer, mostly driven just by object scripts, text scripts, and binary resources. A lot is like a saved house, so a save file. So I'm gonna run through, like, a few of the more interesting areas of the Sims 2 process, and at the end I'm gonna look at, like, the lessons we learned and how we're moving forward with some of these lessons. So we're starting with art. In many ways, a lot of what we try and deliver is basically the output of our animators as entertainment to our users. I mean, obviously that's not everything, but it's a significant fraction of it. So you can kind of view a lot of our pipeline as a kind of content delivery system. So these are some stats from art. A lot of models as well as animations. A lot of effects, which we'll get to a bit later. Basically just a lot of data. This is somewhat less interesting: UI.
I'm actually not gonna talk about UI too much, because it ran pretty smoothly. We basically have UI down as a solved problem at this stage, which contrasts quite heavily with some of the other parts of the game. So we had a lot of artists. Our technical art director likes to tell the story that he comes from Pixar, and he worked on the first Toy Story movie, and the first Toy Story movie actually had fewer animators than we did on this project. So that's kind of giving you an idea of where games are getting compared to earlier computer animation. Our art tools were pretty vanilla. Maxis had been kind of a Max-based studio before this title, and Sims 2 was really the start of the whole Maya transition, and now we're mostly Maya-based. And a lot of that is just because we find it a better tool for animation. Basically we brought over all of the animations from Sims 1, which were in Max format, and wrote a complicated pipeline to kind of drive re-rigged animations in Maya from the source animations imported from Max. We didn't use any of this stuff in the end, but it was pretty useful as kind of like an initial guide. The reason we didn't use much of it is just that some of the animations were no longer relevant, and everything that was relevant basically got redone to a much higher kind of level of detail. Like some of the other previous talkers, we used MEL a lot for basically all of our UI within Maya, and it did a very good job of that. Our pipeline, again, is reasonably straightforward. One interesting thing we did is about halfway through the project, we added direct read support for Photoshop, and what this allowed us to do is have the artists work on all of the layers for, say, a sim or a model in a single Photoshop file, which helped their process a lot.
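As an aside, the benefit of reading the artists' layered files directly is easy to sketch. This is a minimal, hypothetical illustration, not Maxis's actual importer: a `Layer` structure stands in for a Photoshop layer, and a simple alpha-over flatten is the kind of step an export pipeline might run so artists never have to hand-flatten their working files.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    visible: bool
    alpha: float     # layer opacity, 0..1
    pixels: list     # flat list of (r, g, b) float tuples, one per texel

def flatten(layers, width, height):
    """Composite visible layers bottom-to-top with simple alpha-over."""
    out = [(0.0, 0.0, 0.0)] * (width * height)
    for layer in layers:
        if not layer.visible:
            continue
        a = layer.alpha
        # Blend each destination texel toward the layer's texel by its opacity.
        out = [tuple(dst[c] * (1 - a) + src[c] * a for c in range(3))
               for dst, src in zip(out, layer.pixels)]
    return out
```

A one-texel example: a fully opaque red base with a half-opacity blue tint on top flattens to a purple result, exactly as the artist would see in the tool.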
In terms of animation and modeling, basically we went from Maya, we used an EA-internal kind of platform-neutral export format, and then we had our own asset compiler called Go Disco that converted that into game resources. Now the way this all worked was kind of interesting. The artists basically checked directly into Perforce, and the reason they didn't forget to check into Perforce is that there were basically two parts to that step. You checked in your new animation or model or whatever, and then that would trigger the build machine that would then convert that content to the game format, and you'd get an email back telling you if there were any errors or whether the process had gone smoothly. So you weren't done until you got that email back and it was all clear. This whole process also built a big web page with basically a link for every single asset in the game. So you could browse this website and drill down to any particular model and all of the game events that might be in it, or the number of vertices, you know, you name it. The Sims 2 skeleton increased in complexity quite a bit over Sims 1X. Sims 1X actually had smooth skinning, which I was initially surprised to find out about. A lot of the extra attention to detail went into the facial morph targets, which is where we got a lot of our extra kind of emotion and expressiveness. Unfortunately, it's a bit dark so you can't actually see it up here, I guess. So the main skeleton had 64 weighted bones that were actually being processed by either the GPU or the CPU, but the skeleton itself had 116. So a lot of these extra bones were for grips, you know, where to put your spectacles, where to put your hat, grips for matching up IK, and all sorts of things like that. And it was kind of because of all those extra slots and so on that we didn't lock the skeleton in the end.
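That check-in-then-wait-for-email loop can be sketched roughly like this. Everything here is invented for illustration (the real trigger, converter, and mailer were internal tools): a post-submit hook converts each touched asset and mails back a pass/fail report, so the artist knows they aren't done until the report comes back clean.

```python
def convert_asset(path):
    """Stand-in for the real exporter: 'fails' on files with no extension."""
    filename = path.rsplit("/", 1)[-1]
    if "." not in filename:
        return [f"{path}: unrecognized asset type"]
    return []   # no errors

def on_checkin(changed_files, send_email):
    """Run after a submit: convert every changed asset, then mail the result."""
    errors = []
    for path in changed_files:
        errors += convert_asset(path)
    status = "OK" if not errors else "FAILED"
    send_email(subject=f"asset build {status}",
               body="\n".join(errors) or "all assets converted cleanly")
    return status
```

The same hook could also append each asset's stats (vertex counts, embedded events) to the per-asset web pages the talk mentions, since it already touches every file that changes.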
So this was a big focus of ours early on with Sims 2: you know, finalize the skeleton, get the number of bones and the layout and everything absolutely rigid, and then we'll lock it and forget about it. But that basically didn't work, because as design evolved, we needed to add, you know, grips, remove grips, modify our skeleton. So instead we had a system whereby all of our animations referenced one master Maya file that contained the skeleton. And we had an indexing system that made it reasonably easy to kind of add bones or even remove them without invalidating all the animations. So modeling, we were actually lucky enough to have a lot of the Sims 1 modelers come through, which is kind of important to Sims because, you know, they developed a particular look in Sims 1 that we were basically trying to amp up, and they also had a lot of experience in ways to handle texturing and modeling on an animated character when you have, you know, hundreds of skins maybe. There's a certain amount of process there that has to run more smoothly than with a smaller number. One of the things that went wrong was that the graphics team didn't really give artists either a poly budget or a texture budget. This was very kind of hand-wavy, you know: the code will take care of it, we'll write dynamic simplification or something. What we actually did manage to ship with respect to the textures was that basically we dropped mip levels from all of the textures currently being referenced until it all fit within video RAM. But that's a pretty kind of clunky and error-prone process, so I don't think it really worked out. We contracted out our object LODs, which seems to be the thing to do these days. It didn't work as well as you might think, because there's a lot of overhead in contracting out, in terms of getting the external team up to speed on your process.
And if you don't get that exactly right, you wind up having to fix every animation or model that comes back. And we had a fair bit of that. So because we had a lot of models, we implemented a bunch of different visualization modes to try and spot kind of bad model usage. So we had texel density visualization so we could try and figure out where we were wasting texture memory. We had a neat little animated model display where basically the content viewer that we used to view our models or animations would repeatedly lerp between, say, a model and then its layout in the texture map. So you could kind of see at a glance where the texture map was going on that particular model. Obviously late in the game, we also had to implement this thing to let the artist switch between the different shader paths in the game. This was kind of because the shader paths were so buggy. We wanted to make sure that they were tuning the lighting and so on to the right thing. Speaking of lighting, this is one of the most well-processed parts of the game. The art team actually produced a quite extensive TDD on the whole lighting design, and we even prototyped it beforehand. We had two lighters during the course of the game, kind of one before slip and one after slip, and we pretty much had dedicated people for that because rather than tuning lighting for a level or something, they were tuning a system that had to try and light the game no matter what the user did, no matter where they placed their lamps or built a house or anything like that. So that was kind of a new challenge for us, and some of it worked, some of it didn't. Just quickly on what we did for level of detail: this is a lesson in kind of talking about something a lot and not actually sitting down and implementing it. We wound up with static LOD, never formally spec'd. All of our sims had at least one LOD with fewer bones and about half the poly count. And in fact, we don't even switch LOD dynamically within a lot.
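A texel-density visualization of the kind mentioned above boils down to a small amount of per-triangle math: compare a triangle's UV-space area (scaled by the texture resolution) to its world-space area, and map the resulting texels-per-unit figure onto a heat-map colour. A minimal sketch, with invented units and no colour mapping:

```python
import math

def area3d(a, b, c):
    """World-space area of a triangle via half the cross-product magnitude."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def area2d(a, b, c):
    """UV-space area of a triangle (2D cross product)."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) -
                     (c[0] - a[0]) * (b[1] - a[1]))

def texel_density(tri_pos, tri_uv, tex_w, tex_h):
    """Texels per unit of world area for one triangle; drives the heat map."""
    texels = area2d(*tri_uv) * tex_w * tex_h
    world = area3d(*tri_pos)
    return texels / world if world > 0 else float("inf")
```

Outliers in either direction are what the artists would look for: very high density means wasted texture memory, very low density means visible blur.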
I didn't quite know where to throw this in. This is sound. Maybe it's art, maybe it's engineering. We had a lot of sound, like some of the previous games. The stat that always gets me is 43,000 vox samples. And the reason that's kind of mind-blowing is that all of our sims speak in Simlish, so we don't even have to localize that. So if you imagine having to localize all those vox samples, it would just be insane. Lots of footsteps and just a lot of ambient sound kind of triggered off the camera and triggered by various objects and so on. And sound was always big; at one point it was actually two-thirds of our data footprint. Using various compression methods, we got that down to a gigabyte. But it's a big part of our game. Okay, design. This was like an early version of The Neighborhood that thankfully was replaced by something much prettier. Design was one of the biggest areas in Sims 2. It was really difficult, basically. You're trying to improve on Sims 1, which is one of the best-selling games of all time, and attract people who haven't played the game before, bring back people who played it for a while and got bored, and not lose all the people who are still obsessed about the original or buying all the expansion packs. Also, we have surveys showing that people play The Sims in very different ways. It's almost an even split between four different styles of play. I can never remember all of them, but it's something like: there are people who play the romance game, you know, the actual simulated bit, people who just build houses, people who basically like to torture their Sims, that's a big one, and something else. So we set ourselves a goal about two years out from ship to get a 90-plus Metacritic rating, because this was a big thing at EA at the time, and I think still is. If you can get that rating, then your sales go up by a certain factor of X. And we actually hit that. You know, we're still at 91, I think. So there's all of that.
And then there's the pressure that comes from a high-profile title, which means a lot of constant demo pressure. So I don't think the Oddworld guys suffered too much worse than us in terms of, like, constant demos and having to deal with that. So while we're not selling to external producers, we are selling to EA executives, and we have to keep them on board with our changes and so on. We have this constant question of, you know, we're throwing new stuff in there, trying to make it compelling. Where do we stop? If we take too long, then we basically have to throw more stuff in, because it's already been too long since the last one. And early on, we also lost a lot of people and a lot of ideas to, well, the Sims 1X expansion packs. So new gameplay. We started off with movies as the main idea, you know, let people do the machinima, video-making thing. Yeah. And we actually came back to that as almost the last idea. We brought on aging, the whole idea of having a sim grow up through its life cycle, having four different age brackets and doing transitions between them. And then we threw another thing onto the pile: generations and genetics, you know, family trees, breed, crossbreed your sims and see what they look like. Horrific, really. Then, big life moments that we were trying to bring in, you know, some of the oomph that other games get kind of out of, you know, you defeat the boss monster and you get some great cut scene. So we were trying to emphasize some big moments in the gameplay. Aspirations. And then Wants and Fears was pretty much the last one, and that's probably, I think, the most compelling new bit of gameplay. So we kind of drew the line at that. So Wants and Fears, basically, and Aspirations, I think, came after the slip. So once we'd slipped, and half of the reason we slipped was because we didn't feel the design was there, we needed extremely tight process control to make sure that we didn't slip again. So we had very strict kind of change review.
Anything that was going to be changed about the game had to go through meetings. We dedicated people to particular features, and we basically had SWAT teams to try and bring these new features, design additions, to completion without screwing up the currently working part of the game. So in the end, it shipped. That first point is kind of redundant. We just could not have shipped at the original time. We probably couldn't even have put anything into a box. We got enough extra gameplay in; the Wants and Fears thing, I think, was crucial. And we also got the chance to finish engineering, which was the other reason that we slipped. We were way behind on engineering. And in the end, it was quite a success. So the lesson from that, if you kind of go back at Maxis a year, is never give up, because things were looking pretty bleak at one stage. There's always a cost when you slip. So basically, having to constantly redesign, and we'll revisit this, is always a negative. Design changes are really costly, and all of the postmortems I read about this project were basically talking about how difficult it was to deal with constant change. We also got sucked into the EA Redwood Shores mothership in early 2004 after the slip. I think they wanted to keep a closer eye on us. And there were various kind of negatives that came out of that. So we have a very big incentive to kind of learn how to do this kind of thing better. So, object engineering. Probably as important as art, these are the guys who basically do all of the gameplay in The Sims. So the way The Sims works is that there's a simulator running a bunch of objects, you know, objects on various tiles, and each object has a whole series of associated scripts, and is also responsible for sequencing all of the animations that involve that object. So mostly this continued to work out well. I mean, the simulator is a reasonably nice process.
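A toy sketch of that object/simulator split might look like the following. This is purely illustrative, not the real simulator's semantics: each object sits on a tile and carries a queue of script steps, and the simulator gives every object one step per tick, which is roughly how scripted objects end up sequencing their own animations.

```python
class SimObject:
    def __init__(self, name, tile):
        self.name = name
        self.tile = tile          # (x, y) grid position on the lot
        self.queue = []           # pending script steps (plain callables here)

class Simulator:
    def __init__(self):
        self.objects = []

    def tick(self):
        """Give every object one script step per simulator tick."""
        for obj in self.objects:
            if obj.queue:
                step = obj.queue.pop(0)
                step(obj)
```

In the real game each "step" would be an instruction in the object's script, and many of those steps queue up animation clips on the sims using the object.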
We basically have it done in terms of adding objects and so on, you know, we've built up a lot of experience from the expansion packs and so on. There's still some problems with it. The scripting language is pretty simplistic, and often what we're trying to expose to it from the game is reasonably complex. So you wind up with some pretty hairy scripts, which is what I'm referring to in that mismatched line. Another thing that goes on is this is the way we do our blending, as I mentioned before. So the artists basically provide all the source animations, and then it's the object engineer's role to kind of blend those together to produce the final animated result. So you've got an obvious problem there of synchronizing, especially if the OEs aren't sitting right next to the animators, which was our situation. So at least early on, you constantly had object engineers going back to talk to the animators, trying to get them to run through, you know, show them in Maya where certain events were, trying to figure out what they were missing. It was a big process problem. As a result of this, one of the OEs, as a skunkworks project, produced this little tool called Clockwork, which basically sucked in all the animations, read all the tags, and provided a little browser, which made it very easy to figure out what the animations were doing and where all the timing events were. So Edith is the visual scripting language that drives all of this. And actually, one of the main learning experiences from The Sims 2, and probably the previous expansion packs too, is that for this situation anyway, visual scripting just does not work very well. You give up an awful lot for a nice visual scripting language. There's no revision history in Perforce, no good search and replace, all kinds of things you just take for granted with text scripting languages, you lose. And that becomes pretty frustrating.
So I did a little survey at the end of the project, and pretty much, well, to a man, all OEs wanted to move away from it. The one good thing about a visual scripting language is that it pretty much enforces having a good debugger, although I gather it was broken for a good part of the time. So we even tried to use Edith a bit on SimCity 4, of all things, which didn't work out at all well. So, I guess our studio can say this now: we're basically going to ditch Edith and attempt to use Lua in the same capacity. Engineering. We had about 28 engineers at peak, starting off with about 5 or 10 for the first few years. So as I said before, 1.1 million lines of code. We have quite a lot of shared code at Maxis. There's this framework code that has been shared, I believe starting with SimCopter, and has been used in all sorts of titles since then: SimCity 3000, SimCity 4, Sims Online, the original Sims. All of the UI code came from this framework. And we also had a lot of engineering-generated scripts, mostly materials, but also things like cameras and catalog and light tuning. We had one engineer dedicated to what we call the WorldDB, which is the world representation, so the terrain and so on. And that was our main link between gameplay and the engine. There was a lot of talk early on in Sims 2 about moving away from the tile-based system that it subsists on. And quite frankly, we go through this with every new title, and it's the same with SimCity. All of our games are kind of tile-based. Aesthetically, you'd like to move away from that. The reason that we stay with tile-based systems is just UI and kind of gameplay. We would have to drastically rewrite the simulator to be able to move away from that. This actually gave us a problem later on, because we had this nice generalized WorldDB code, and it was quite hard to make it efficient in terms of a tile-based structure. So routing was one of the biggest complaints about Sims 1.
You had this party syndrome where you'd have 20 people in a lot, all trying to get somewhere at some time, and basically stacking up. Early on, we contracted a company to write a replacement for the Sims 1 router, but the results weren't that fantastic. And even if they had been, there was still a problem in that the router is really essential to gameplay in The Sims. So it's really something that we needed to own and kind of iterate on with the simulation engineers. So instead, we dedicated an engineer to that, and I think that was a very good decision. That's just a quick list of some of the features in it. Probably what makes it most different from other routers is just some of these, the last two extra features, and also the fact that the routing paths tended to vary a lot based on what the simulator was telling the routing system. So there was very kind of tight coupling there. Animation, pretty standard. Blended animations, two-bone IK, and a bunch of extra features to handle certain things. Standard reach is probably the biggest one. We obviously have the problem of Sims walking around and having to pick up cups and put stuff in the microwave and all the rest of it. We also had an effect system, which started off as just a particle system on SimCity 4, and it's now kind of grown into this monster prototyping system. It's all script-based, hot-loaded, and kind of hierarchically composed. So a lot of the stuff in Sims, the dynamic geometry that you see, is almost all the effect system, and that kind of includes things like the thought balloons, a lot of the UI that isn't text-based. So I just have a little example here of, say, the fish. So if you play The Sims, you can buy this little fish tank, and the little fish in the aquarium are all run by this effect system, basically a combination of random walks and colliders and quite elaborate state transitions. We tried to move the neighborhood forward quite a bit in Sims 2.
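For a flavour of what an effect-script actor like those aquarium fish involves, here is a deliberately toy version: a seeded random walk clamped against the tank's collider, plus a tiny cruise/dart state machine. The real system was data-driven and hierarchically composed; all the names, probabilities, and speeds here are invented.

```python
import random

class Fish:
    """Toy effect-script actor: random walk + simple state transitions."""
    def __init__(self, bounds, seed=0):
        self.rng = random.Random(seed)        # seeded so behaviour is repeatable
        self.bounds = bounds                  # (width, height) of the tank
        self.pos = [bounds[0] / 2, bounds[1] / 2]
        self.state = "cruise"

    def step(self):
        speed = 0.1 if self.state == "cruise" else 0.5
        for i in range(2):
            self.pos[i] += self.rng.uniform(-speed, speed)
            # "Collide" with the tank walls by clamping to the bounds.
            self.pos[i] = min(max(self.pos[i], 0.0), self.bounds[i])
        # Occasionally dart, then settle back to cruising.
        if self.state == "cruise" and self.rng.random() < 0.05:
            self.state = "dart"
        elif self.state == "dart" and self.rng.random() < 0.3:
            self.state = "cruise"
```

Because the whole thing is parameters plus a state table, it is easy to see how such behaviour ends up living in hot-loadable effect scripts rather than engine code.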
In Sims 1, the neighborhood was a screen capture, essentially, a screen render. We tried to make it much more of a sandbox in Sims 2, much like the lot. So the hardest technical problem there was just kind of imposterizing lots dynamically on the fly. When you exit from a lot, we build a little imposter that kind of captures roughly what it looks like, and we also import all of our terrains from SimCity, which actually made production of, you know, a variety of different neighborhoods pretty easy. There's nothing like having one of your other games provide quite an elaborate production tool to speed things up. I talked before about the lighting system and how it had to handle basically a dynamic environment. How we did this is that lighting was room-based. We had code that would break up our buildings into rooms and portals between each room. And then we basically just did the obvious things. The most elaborate part of the system is just that portals can transmit light. So you can see in the left-hand picture here that actually the only light being specified is that the artist, the lighting tuner, has specified the outdoor lighting conditions, and then the light makes its way in through the portals all the way into that inner room, and a set of fixed-function lights is constructed there to kind of light those chairs correctly. So yeah. Early on we decided we absolutely needed over-bright lighting to get, you know, a decent look. That actually bit us later in a non-obvious way. So basically we had a lot of problems when tuning the lighting system, avoiding blow-out, because of the way that windows could be placed almost anywhere and object lights could be placed almost anywhere; preventing a sim or a nearby bed from basically blowing out completely to white was problematic. Shadows: the only really tricky thing about the shadows was just the sheer number of them. So basically every object has to have a shadow.
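The room-and-portal light propagation described above can be sketched as a simple graph flood: each portal crossing attenuates the light, and a room keeps the brightest contribution it receives. The falloff factor and data layout here are invented for illustration; the real system then built fixed-function lights from the per-room result.

```python
from collections import deque

def propagate_light(rooms, portals, sources, falloff=0.5):
    """Spread light intensity room-to-room through portals.

    rooms:   iterable of room ids
    portals: list of (room_a, room_b) pairs
    sources: {room_id: intensity} for rooms with direct light
    Each portal crossing scales intensity by `falloff`; a room keeps
    the brightest contribution it receives from any path.
    """
    light = {r: 0.0 for r in rooms}
    neighbours = {r: [] for r in rooms}
    for a, b in portals:
        neighbours[a].append(b)
        neighbours[b].append(a)
    queue = deque()
    for r, intensity in sources.items():
        light[r] = intensity
        queue.append(r)
    while queue:
        r = queue.popleft()
        for n in neighbours[r]:
            candidate = light[r] * falloff
            if candidate > light[n]:        # found a brighter path into n
                light[n] = candidate
                queue.append(n)
    return light
```

With only the outdoor conditions specified, light still reaches an interior room two portals deep, which matches the "only the outdoor light is specified" example in the talk.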
The terrain and house shadows were actually not done using traditional kind of graphics projective-texturing shadows. There was a fast CPU-side algorithm that gave us back a little map that we could look up for any location in the world, you know, is that in shadow, depending on the terrain or the house, which suited our target hardware. So our target hardware was very much kind of DX7 level, so we had to keep most of our graphics tricks pretty simple. The other thing we used was these GUOBs, which stands for Generic Under Object Blobs. These are basically just cards, like you saw in the ATI demo, that we pre-render; originally they looked like blobs, but after that they got quite nice, you know, we did some software rendering to get a nice kind of ambient-type shadow indoors. This is most important for things like contact shadows. If you have a painting against a wall, it really looks much better if you include the contact shadow that it's casting against that wall; it helps kind of seat it. So the graphics engine was a curiosity in some ways. It was written as a completely generic engine that, you know, tried to disregard anything to do with gameplay in the Sims engine itself, and it was written as a completely generic scene graph. So the way you assemble your scene is basically by creating nodes, you know, Maya-style. Your Sim might be a hundred different transform nodes that you've read off disk, and you insert that into the scene. If you want to find a node on that Sim, you do a traversal over that sub-branch and look for tags and so on. So basically a lot of caching went on to make this all work properly. And it just seems like a fundamentally bad idea. This kind of thing really belongs in your art pipeline, I would claim, because just the overhead and the extra complexity in dealing with the scene graph, you know, actually in gameplay code was enough that I think it brought down the productivity of the actual game programmers quite a bit.
So we had graphics performance problems with this, as you might imagine, and plus just our batch count. So we're rendering a lot with a whole bunch of, you know, a hundred, two hundred different objects in it, and each object might have different subsets, and plus you have the walls and the roof and all the rest of it. So this was a problem for us early on, and eventually the solution we turned to was a very old and hoary one. So SimCity uses dirty rects. Basically dynamic objects are re-rendered per frame, and anything that we know is static we just hold over from the previous frame. So I thought SimCity would be the last game that ever shipped using that technology, but that's the way it went. We had an initial kind of prototype dirty-rect system just to get working on low-end hardware, and this eventually morphed, over a number of months, into a full SimCity 4-style dual-layer system, and this caused us no end of problems just because it was retrofitted. There were all sorts of subsystems that basically needed a lot of extra code added so that we could identify when things had changed and where they had changed. So some of this came about just because our target platforms were so broad and so low on the minimum end. So we tried to support actual non-T&L commodity Intel hardware, you know, the 815, 845, and basically the broad hump of the cards we were trying to support, or machines we were trying to support, were machines with DX7-level cards. There's a whole bunch of extra slides and commentary in the proceedings that I encourage you to read if you're interested in this, but given that a lot of this is kind of not of interest going forward, I figured I'd better skip it. Apart from this slide: so, game configuration. This is a real headache, targeting that many target platforms.
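Since dirty rects do so much work here, a minimal sketch of the idea may help: keep a list of rectangles that changed this frame and re-render only the pixels inside them, holding everything else over from the previous frame. The real system was a dual-layer compositor; this is just the core bookkeeping, with invented names.

```python
class DirtyRectCanvas:
    """Redraw only regions marked dirty; everything else is held over."""
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.pixels = [[0] * w for _ in range(h)]   # the retained frame
        self.dirty = []                             # list of (x, y, w, h) rects

    def mark_dirty(self, x, y, w, h):
        """Called by any subsystem that knows something changed, and where."""
        self.dirty.append((x, y, w, h))

    def redraw(self, render_pixel):
        """Re-render just the dirty rects, then clear the list."""
        drawn = 0
        for (x, y, w, h) in self.dirty:
            for j in range(y, min(y + h, self.h)):
                for i in range(x, min(x + w, self.w)):
                    self.pixels[j][i] = render_pixel(i, j)
                    drawn += 1
        self.dirty.clear()
        return drawn    # how many pixels we actually touched
```

The retrofitting pain the talk describes lives in `mark_dirty`: every subsystem that can change the picture has to report when and where it did so, which is exactly the extra code that had to be bolted on everywhere.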
We had a reasonably elaborate system on SimCity to handle all the different possible configurations and driver bugs and all the rest of it, and we basically amped that up by about a factor of two, I think. So our experience has been, and I'm sure it's the same elsewhere, that relying on caps bits, or any kind of relying on the card to tell you what it can do or not, just does not work. So basically we build a lot of logic into scripts that tell us, you know, which cards can support what and where to set various level-of-detail parameters. So, memory handling. This is a PC title, so as usual we're pretty fast and loose about memory. A lot of STL, very similar to Charles' description of what they were using at Oddworld, almost all vectors, not always efficient, but we can usually pick up cases where it isn't efficient with profilers, and our main obsession with a lot of this stuff is just leak prevention. Because we have to scale the game across a number of platforms, it's much more difficult: we can't just hit a particular memory target, we probably have to hit four or five, depending on the different platforms that we're targeting, you know, how much memory the system has, whether it's running a card that's going to require backing from system memory, and the rest of it. We also use ref counting a lot, and interfaces, and similar kinds of smart pointers; our auto ref count is basically the same kind of smart pointer thing that Charles was talking about. And for anything that really needs efficient allocation, we have custom allocators to do, you know, similar pools, and things like that. We have some interesting observations over the last couple of games that we've shipped. Basically it's reasonably easy to track down leaks these days.
We've done a number of code sweeps, and kind of added up the percentages of where we find the problems, and if you want to find a memory leak, just search for a raw new or delete. So as a result, we have almost none of those; we avoid this. We used to have a lot of manual ref counting, and we're replacing all of that, because that's another hot spot. Not quite as bad, but the trouble with manual ref counting is that it tends to be very fragile, so we'll come along later and add some code, and break an existing working system. And finally, we avoid the kind of ref count loops by having very kind of solid init and shutdown approaches. So basically be sure to init and shutdown all classes, and it's absolutely a requirement that a shutdown releases all member auto-ref pointers. I thought this would be interesting: in terms of the biggest leak that we had during finaling, on SimCity 4 it was actually the Lua garbage collection, which I find kind of ironic. There was a race condition in the Lua code that led to massive memory leaks in a particular situation. In Sims 2, we discovered in the last couple of weeks that someone had left on a particular logging system, and basically that log data was all being accumulated in a big buffer. Another of the interesting observations from Sims 2 is that, although we got well back from this, you know, by the time we shipped, we ran into situations as we entered finaling where we were actually running out of virtual memory space. So PC games have long used virtual memory as kind of like a crutch to avoid having to be too careful about the amount of memory that you use. But that kind of free lunch is running out. Basically that address space is getting smaller and smaller, depending on DLLs and things like that. So I'll just give you an example. This is a little snapshot of the Sims 2. This is actually after we solved our big leaks that were causing us to run out of virtual memory.
But you can still see that the big blue band in the middle is all of the allocation from basically the engine, the scene graph, and the first layer of game code over the top of that. We have a lot of allocation churn from that, and as a result, a lot of address space that we've touched and then released again. So, resource management. I'm not going to talk about this too much, because it's been covered a fair bit in previous talks. We have a reasonably conservative key-based system that we use at Maxis. The only real wrinkle here is that the initial Sims 2 team weren't really familiar with it, and first of all overused resources, and second of all tried to do an alternate name-based system and wound up hashing strings into 32-bit UIDs. Now, you can get away with that, and we do get away with it on, say, console titles at Maxis, where you have a smallish number of resources — 5,000 or 10,000 or something. But with our resource count, you're just bound to get collisions, and indeed we did. And we also had the usual custom-content headaches: someone in India creates a skin, someone in England creates a skin, they upload their skins and someone in America downloads them both, and you have to be careful about resource collisions. So we basically brute-forced both of these problems by changing the instance ID to 64 bits. That solution came about just because we'd painted ourselves into a corner — with any amount of simple, straightforward planning at the beginning of the project, we could have avoided the situation, but we wound up very late having to deal with these collisions, and in the end making that brute-force change was the easiest thing to do. Configuration management. We're starting to get quite a big configuration management team. It seems quite similar to what Chris was talking about earlier with Bungie.
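The collision math backs this up. For random 32-bit IDs, the birthday bound puts the chance of at least one collision among n resources at roughly 1 − e^(−n(n−1)/(2·2³²)). A quick sketch — my arithmetic, not Maxis code — shows why 5,000–10,000 resources is safe but Sims 2 scale is not:

```cpp
#include <cassert>
#include <cmath>

// Birthday-bound estimate: probability of at least one collision among
// n IDs drawn uniformly from a 2^bits space.
double collisionProbability(double n, double bits = 32.0) {
    double space = std::pow(2.0, bits);
    return 1.0 - std::exp(-n * (n - 1.0) / (2.0 * space));
}
```

With 10,000 resources the chance of any collision is around one percent; at 100,000 it's roughly two in three. Widening the instance ID to 64 bits pushes it back to negligible, which is why the brute-force fix worked.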
We had six to eight engineers on our configuration management team, shared between Sims 2 and a few other projects. We have a system at Maxis where we test the game by running commands through a command console, which has proven very effective. Basically, we can write a whole bunch of unit tests that run the game through its paces and then check for certain conditions, and once we build up a library of tests like that, we can run them constantly, week in and week out. Before each check-in, even early on in development, you're required to run a simple sniff test, to make sure that the game is reasonably functional. We used DevTrack. We had our own custom tool for rolling out builds. It turns out to be a bad idea to roll out development builds incrementally when you have hundreds of thousands of files. It's a bad idea for the network; it's a bad idea for file I/O. We probably would have been better off just releasing zip files of daily builds. We had a facility similar to ones discussed here before, where if the game crashed, it would send off a stack dump and all sorts of other annotations about the simulator and the animation system, and these would be gathered and published on a website. So you could go to that website on a particular day, see the most common crashes, follow the links, and actually get to the place in the code where it crashed. We had quite an elaborate source control setup. We actually had two branches: the engineers worked on this DevLan branch, and that went through a testing cycle before it was integrated across to the main line. The main line was also tested before it was, in turn, released through this Robby tool to production and art.
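The command-console testing idea can be sketched in a few lines: game systems register named commands, and a unit test or sniff test is then just a script of commands plus checks on the results. Everything here — `Console`, `Register`, the command names — is an illustrative reconstruction, not the actual Maxis console:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <sstream>
#include <string>

// Tiny command console: systems register handlers by name, and tests
// drive the game by feeding it command lines and checking the replies.
class Console {
public:
    using Handler = std::function<std::string(const std::string& args)>;

    void Register(const std::string& name, Handler h) { mCmds[name] = h; }

    std::string Run(const std::string& line) {
        std::istringstream in(line);
        std::string cmd;
        in >> cmd;                       // first token is the command name
        std::string args;
        std::getline(in, args);          // the rest is passed to the handler
        auto it = mCmds.find(cmd);
        return it != mCmds.end() ? it->second(args) : "unknown command";
    }

private:
    std::map<std::string, Handler> mCmds;
};
```

A sniff test then becomes a list of `Run` calls with expected results — cheap to write, and cheap to run before every check-in.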
So it turns out this is just way too much process for a game, except in the last few months, and one of the common bits of feedback after the game shipped was that there was way too much turnaround between, say, an engineer checking in a feature and an artist or a scripter seeing the result. It could be up to a week — pretty bad. The other problem, which I forgot to mention on the other slide, is that our publishing tool had no rollback facility. So if, in spite of all this process, we did manage to publish a bad build that was broken in some way, that would basically stop people for a day or so. So, lessons learned. Everything was nice and peaceful after shipping, but at some point you have to sit down and figure out what you can change. The thing that went right was basically art and content. We had all sorts of the usual process problems, but when it came down to it, we could always hit art lock. One of the things here is just that we have a lot of crossover with the animation industry these days. Like I said, our technical art director was from Pixar and was used to setting up a lot of the pipeline systems that we use. And we're also hiring from the animation industry now, and they've been dealing with these problems for quite a while. The things that went wrong in art basically go back to design again. There were a lot of late changes to design. We had, I think at one stage, two or three art directors, and also executives and even producers would throw in commentary about how something should look. So there was a lot of thrashing going on there. There were some communication issues — the usual thing with a team that big: the code turnaround that I was talking about, and just generally there wasn't a lot of communication between the engineering team and art, apart from a few pathways. The kind of person we're having trouble hiring is the technical art type.
So we're starting to see the need for the technical director role from animation again — someone who's an artist, but is also very good at scripting or whipping up useful little pipeline tools. This was probably our biggest lesson: the later you make a design change, especially if it's a reasonably big one, the bigger the cost to engineering, art, production — just everyone. So we were between a rock and a hard place, because we had to get that design right, and it wasn't right. Maxis has been through this before with the SimsVille title that never shipped. The reason it never shipped is basically that it looked gorgeous, but there was no gameplay there. So one of our biggest issues is how you address that kind of thing, which I'll get to a bit later. Engineering lessons — we had a few. The engineering process was pretty chaotic. We had a bit of a problem with a large section of engineering mostly concentrating on the graphics engine and not on the game. In fact, the game got neglected to some extent. We had an ongoing problem, especially early in development, of not having enough people actually working on the simulator and the various things that were going to be delivering all this new gameplay. It's indicative that in the research team that went for two or three years, only about half the time did they ever have an engineer actually working on the Sims simulator code base; everyone else was working on the graphics engine. Like I mentioned before, a scene graph really belongs in the pipeline. I can go into that in more detail afterwards, but this did not work particularly well for us. There's the complexity issue, and then you get into problems with fragility, where you're persisting, say, a scene graph branch that's supposed to match your gameplay state, and if those get out of sync, you can run into all sorts of problems.
The resource problem and a few other simple things like that taught us: don't ignore your existing shipping, hardened code. Don't think it's a good idea to spend engineering time trying to come up with a better system where you already have a decent one. Don't contract out core gameplay components — that was the lesson from the routing. And from engineering, basically, more urgency. Part of the result of having some engineers detached from the actual game was that there wasn't a lot of urgency, and that was actually where most of our senior engineers fell, so there was a lack of urgency from the top of the team about getting this thing out the door. This is a transporter accident — we have hundreds of these, the usual kind of skinning accident. So I'm going to run through a couple of slides drilling down into what went wrong in engineering, just because it's my area. We had a lot of over-engineering, and I list a completely insane example here, but there are other examples almost this insane. Take the concept of a pixel format — ARGB, 5551, and all the rest of it. Something you'd normally represent with an enum. Well, we had five interfaces, I think, and 2,200 lines of code, just to represent that concept. We had some people who were very enamored of C++ and of COM, and this led to what I call bubble-wrap syndrome, which is where you have a reasonably simple core game or engine concept that you want to be able to manipulate, but it has so much layering over the top of it that it feels like you just can't push through and directly do the thing. We're gradually transitioning to RenderWare at Maxis, and so far I'm liking that a lot better, because it has this toolkit concept, where you basically grab bits of the code and use them directly. So Charles said not to diss C++, but there are some areas of C++ that you just have to.
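For contrast, the enum version of the pixel-format concept is a handful of lines. This is a hypothetical sketch of what the 2,200-line version could have been, not code from the game:

```cpp
#include <cassert>
#include <cstdint>

// Pixel format as a plain enum; per-format queries become small switches
// rather than interface hierarchies.
enum class PixelFormat : uint8_t { ARGB8888, RGB565, ARGB1555, ARGB4444 };

int BitsPerPixel(PixelFormat f) {
    switch (f) {
        case PixelFormat::ARGB8888: return 32;
        default:                    return 16;   // all the 16-bit packings
    }
}
```

Adding a format is one enum entry and possibly one switch case — no new interfaces, no extra layering to push through.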
So our math library was written using template metaprogramming, which was all the rage a few years back. This points at a common problem on Sims 2 engineering-wise: if people implement new approaches, they should always profile them and show that they're actually better. Unfortunately, no one ever did this, and it turns out that even with this template metaprogramming idea — which is that you wind up with more efficient code because you're throwing all of the math instructions at the compiler to do with what it will — we got a decrease in performance over a normal, standard math library, because we were basically blowing the compiler's mind. Any compiler is going to give up after a certain amount of inline depth. But that actually wasn't the real problem. The problem with this approach, which leads to one of these non-obvious things you have to be careful about, is its impact on debug speed. This approach to a math vector library multiplied the number of function calls in any particular math operation many times over. If I remember right, a vec4 addition wound up being 20 function calls in debug — and that's with asserts off. With asserts on, it was more like 40 or 50. So when we did a kind of reverse patch job and tried to mitigate some of this stuff, we found that the overall impact on debug speed was something like 75% on our core engine routines. What this points to is that you really have to be careful, in the early stages of your project, to have people who are experienced and practical, because the stuff they do will impact the later programmers immensely. Another problem was just API churn.
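You can see the debug-build call explosion with a toy expression-template vector. This is a deliberately simplified reconstruction, not the actual Sims 2 math library: every instrumented call bumps a counter, and a single vec4 add touches 17 of them here — in a debug build with no inlining, each of those is a real function call. The real library, with deeper layering, hit 20 and up:

```cpp
#include <cassert>
#include <cstddef>

// Expression-template style: operator+ returns a lightweight node, and
// evaluation happens lazily, component by component, in the assignment.
// gCalls counts how many tiny instrumented functions one add goes through.
static int gCalls = 0;

struct Vec4 {
    float v[4];
    float  operator[](size_t i) const { ++gCalls; return v[i]; }
    float& operator[](size_t i)       { ++gCalls; return v[i]; }

    // Assigning from any expression node evaluates it per component.
    template <class Expr>
    Vec4& operator=(const Expr& e) {
        for (size_t i = 0; i < 4; ++i)
            (*this)[i] = e[i];
        return *this;
    }
};

template <class L, class R>
struct AddExpr {
    const L& l;
    const R& r;
    float operator[](size_t i) const { ++gCalls; return l[i] + r[i]; }
};

template <class L, class R>
AddExpr<L, R> operator+(const L& l, const R& r) { ++gCalls; return {l, r}; }
```

In a release build the compiler may well flatten all of this to four adds; in debug, every one of those operators is a genuine call, which is where the 75% slowdown came from.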
So because we wound up with these separated engineering teams — one of which was highly dependent on the other, but which weren't really talking to each other — you get this problem of API churn, where a number of low-level APIs were being changed constantly to meet some measure of conceptual purity on the engine side. That led to the obvious problems you might imagine. So, adding this all up, my point of view basically comes down to a lack of productivity. At some point you have to ship the game, and you have to be careful that you don't attempt too much research-y coding before doing that. One of our learning experiences was that by taking the conceptually pure approach and trying to do things right, we actually lost a lot of opportunities. The problem with taking that kind of approach is that you run out of time to finish stuff off, or even get anything working. For instance, we have a lot of well-authored normal maps for our sims, but due to the slowness of the whole normal map process, in the last couple of weeks of the project the decision was made to make them only ever show up on 128 MB cards, because no one had ever actually implemented any kind of normal map compression, or even enough code to be able to drop the level of detail of the normal maps. I've already been through the static LODs. We dropped our main shader path, also in the last couple of weeks, for similar reasons. When you have a tight deadline, you've got to keep close to the middle of the road when you're making a lot of these decisions, or you just will run out of time. Some of that stuff is what I'm referring to when I say "we shipped" — it kind of amazes me sometimes that we did. So that's lessons learned; now, what are we actually going to change in what we're doing?
Like I said, one of our biggest problems was design churn, so what we're doing is learning about pre-production. This is something that other areas of EA have been reasonably big on, but we never had it at Maxis, just because we hadn't been in that arena until, say, Sims 2 or maybe The Sims Online — titles where you have massive teams, and getting everything through the sausage factory and out the other side means you really have to do things in order. The whole purpose of pre-production is that you do all of your research then — all of your prototyping — and make as many design decisions as possible that are going to stick, because with every one of those you nail down, you're reducing the risk later on. And then, theoretically, you go through the slow ramp-up to full production, slowly throw more engineers at it, and it all works marvelously. So we'll see how that goes. For a project your size, how many people do you think would be on pre-production? 10 to 20. So why didn't we do this before? After all, the early Sims 2 research team was about 5 to 10 people — it bounced around depending on what other projects were going on at the time. Basically, we just didn't have the right mindset. We treated it as a research team, and most of that research went into graphics research, or very limited new-idea research that wasn't actually implemented and tested. So the key is really: don't just make design decisions — get in there and prototype them, do some gameplay prototypes, and actually play through and see whether it's going to work. In terms of some of our communication issues — these aren't due to anything particular to Maxis, it's just the problem of having teams that large and getting communication from one place to another — we really have to change the way that we work. So we're trying this pod style of working, where we move much more to the kind of SWAT-team approach that we have used in the past.
So the goal is to get into these small groups where the iteration turnaround time is really fast and there's not a lot of cross-process overhead to slow people down. This is a small example of that kind of thing from The Sims 2. For the original ship deadline of January 2004, we had actually cut swimming pools, because we had none of that work done and it was going to be quite a bit of work — which was almost unbelievable. Swimming pools are an essential part of a Sims game because of how you kill sims: you throw them in the pool, you take away the ladder, and they die. So we got a skunkworks team together to save them, driven by a producer, all completely outside the normal process rules. We did the simplest things we could — basically just get something in there and get it working, with a small number of people — and I think we did a really good job of getting it all in and working. It was a lesson in how well stepping outside the big meat-grinder process and just having a small SWAT team working on a particular feature could work. And of course we added more stuff to that once the slip came down. We're already doing a lot of art pre-production on next-gen titles and other titles. We even have concept artists being hired to do sketches, to try and figure out what our look will be and basically just explore the space of looks. It turns out to be much faster to have an artist do that than to do a whole bunch of Maya renders or, even worse, get a game engine up and running and do it there. A big success from Sims 2, art-wise, was probably the automated content build. Once that was up and running it worked very smoothly, and given the sheer number of models and animations that went through that system, I think it did a good job of getting everything to the right place at the right time. Engineering — basically the lesson is just keep it simple.
We way overdid the complexity on Sims 2, and that was actually pretty much localized to that team. I've had experience with other teams at Maxis, and they've been much more practical and much more focused. Basically, the simpler you can keep the base-level things, the more complexity you can build on top of them. I'm sure I'm not telling anyone anything particularly new here. And yeah, rapid iteration, as some of the previous speakers have said, is absolutely the key to game development. Technology-wise, we're moving to this commodity rendering system. As someone said to me, the great thing about it is that it's something everyone can hate together, because it's external. We're adopting the effects system, which worked out pretty well for authoring large amounts of dynamic and complicated content, and also as a prototyping tool — we're using it quite a bit on current pre-production work to figure out what something will look like. We're switching much more to text-based scripting, as I said before. And we had a kind of game object asset database on Sims 2, but it was a carryover from Sims 1, and it wasn't the best, so we actually have a team working on building a proper, production-hardened system. And that's pretty much it. These are some of the people at Maxis who were helpful in giving me advice and statistics and all the rest of it. All right, thank you. So, who's got questions? All right. Go ahead and start, and I will bring the mic. So I have a couple of questions for you, Andrew. Because you have such a tremendous number of followers, your topic keeps growing larger — I think you need to be on the news too. I guess the first one I'd start with is: why was your content building system centralized?
Like, why was it important for your artists to build their content on the server rather than actually building the content on their machines? Why does an artist need to check something in in order to validate it? That was something I didn't quite understand — they could validate it locally. So this is just art that is actually going into the game, into the next daily build. Locally, they would be developing a model or animation, hit a preview button, and it would do an instant export and fire up the local viewer. We had a very lightweight viewer application, and they'd be able to preview it in there. That was not the same as actually putting it in the game for a preview. Right — you don't need to put it in the game for a preview. Yes, and the content building is the same. Same code. At the beginning, there was a lot of talk about the importance of the tech lead vetting decisions. I'm wondering how this COM system got built, and your templates got built — how these things lasted more than a week or two in development. Yeah, I have to be somewhat tactful here. You've got to have good tech leads. That's what it comes down to. Were these things pushed on you — like I noticed the outsourcing was pushed on you — were these other things pushed on you by EA or other people? You're right, they shouldn't have lasted more than a few weeks. I mean, they were obviously bad things at the time. Unfortunately, the people who pushed them basically had the support of upper-level engineering management, who had not had engineering experience before. So, you talked about the dirty rect scheme for optimizing graphics. How much do you think the scene graph was a factor in having to go to that scheme? Oh, I think it was a large factor. It's not the only factor.
We would have had to do something else on low-end machines anyway, because we were definitely up against the batch count, but there were a number of other things we could have done there: A, better LOD, and B, accumulating objects into multiple buffers, and all sorts of things. This question may not be relevant, so please feel free not to answer it, but you mentioned twice that the toolkit approach was something really good that you wanted. What exactly is that? I mean, rather than providing a top-level interface that dictates how something is going to be rendered — which was pretty much what we had — you instead get a bunch of utility routines that you assemble into a rendering engine. One of our problems was that the interface between our graphics layer and our gameplay layer sat at far too high a level. So there were a lot of things we wanted to do in the gameplay arena that we couldn't without reaching down to the graphics layer and getting changes made, which meant we were limited in what we could do. I had one other thought, but I've forgotten it. After the slip, was there an expectation to up the target you were going for — say, from a DX7 to a DX8 kind of platform? Because you were slipping by about seven or eight months. No. And I think the reason for that is that we were initially only slipping five or six months, and then we slipped again. But it's not a slip if no one outside hears about it. Yeah, but the pressure wasn't actually on at that point — at the time when we could actually have done anything about it. And I'll point out, we did ship with a DX9 path. The reason we cut the DX8 path was that our system was too inefficient to handle it: you need a lot of different shaders to be generated under that kind of path, as Chris was talking about earlier, and that killed us.
But at the time, there weren't a lot of titles out there that were even using DX8 cards well on the PC. Was there some guy whose entire job was to do the DX8 path, and then it got cut, or anything like that? It wasn't his entire job. Towards the end — say, for the last year — we did have a guy who was pretty much dedicated to material scripting. Up to that point, it had been done by me and another guy. So yeah, that probably was the majority of his job, because all that stuff had to be written in asm, with a lot of different cases to handle all the different ways to pack stuff in. An interesting stat is that the object with the largest number of bones was actually the bed, which had 180 bones and obviously had to be split up for the hardware. That's just because there are a lot of different ways to get into a bed, and the blankets have to ruffle up and all the rest of it. It's never the objects you'd expect that are the most complicated. Okay, so I'm going to do Ken, and then we're going to be open to further questions, and also, if you have statements or want to say anything about your own development process, we'll pass the mic around. So I wanted to ask you about the structure of the engineering team and how the hierarchy worked. Was there someone who sort of ran herd over scheduling, in addition to perhaps multiple programming leads? How did that all flesh out? One of our problems was that it changed a lot. That was actually a problem I didn't touch on: it changed right throughout the process on The Sims 2. I mean, certainly up until the last couple of years, we were constantly changing the producer, or whoever was running the team, as they got sucked off to be thrown at the latest fire on other projects.
So for the last couple of years, we had a development director, and she was directly running most of the engineering team, with the leads kind of off to the side. And I think one of the things that went wrong there is that the leads had a bit too much autonomy and a bit too much separation from the rest of the engineering team. So, in your perfect world, would that not happen again? Yeah — in my perfect world, we would have a dedicated engineering manager, and everyone would be below that. How many leads? I mean, a lot of it just depends on the people you have involved. On the previous title, SimCity 4, we had basically one guy who was pretty much the tech lead, and he was highly competent, and everything pretty much just worked. How many leads were there? On Sims 2? Yeah. It depends how you classify it. Am I just asking the wrong question here? There was a graphics lead, there was an animation lead, and there was kind of a lead engineer who covered everything else but tended to ping-pong into management and back as we went through headcount changes, and so didn't really get heavily involved. Maybe I should use the whiteboard? I'm confused. How many people are interested? We do have a whiteboard if you want to do that. We understand if you don't, but, you know. So you're after, like, the engineering structure? Yeah. Okay. So towards the end— A point in time towards the end, right? Okay. So, at a moment towards the end, it was basically DD, graphics, animation, and, like, another lead. Development director. Development director, yeah. So another problem we had then was that our development director, because we were doing a lot of this massive CM stuff for the first time, had a lot of attention diverted onto CM and managing the daily build.
So there wasn't always a lot of attention left over for the engineering process. So, DD, leads — the problem here is just that it was confused. I could draw a picture which, at one stage, basically had most of the engineers under this third lead. Where are we? Graphics, animation, simulation. So this was just before the slip, I think. After the slip, it was obvious that the DD was overburdened, so we had three engineering managers come in, and each basically took a bit of the team down here. And then, unfortunately, we lost one of them six months later. Did people migrate over to the graphics and animation leads? Did those people actually manage, or were they really just managing their own stuff? So there's managing, and then there's task assignment. I don't think that's quite what I'm asking. Okay, so task assignment was basically these guys talking to the DD — in some cases, say, for specific graphics tasks. So for normal mapping, say, this guy would talk to the DD and then assign tasks to junior programmers, with this guy then asking to see, say, a specification of an interface, or things like that. Animation was very nicely compartmentalized, actually. Basically, the reason the animation lead didn't have a lot of direct reports is that he managed to maintain that system pretty much all the way through, in close conjunction with the technical art director, and only at the end had to work with another engineer to implement animation compression and things like that. So the people under the simulation lead were really being tasked out by the development director? Yes. Okay. For a while. The answer to this question may be "not applicable," but who enforced policy standards across the team? Code policy standards, or—? Code policy standards.
Like, you will deallocate your freaking memory allocation when you clean up your— Initially, when the team was smaller, the graphics lead and the simulation lead did. As the team grew larger and it became obvious that some of these things were bad ideas, everything became much more freeform. I'm not sure I can get into too much more detail. We have a mic back here. So while I'm waiting for the microphone up top — this is sort of a tangent, but something we're trying is a thing called Team Software Process. Carnegie Mellon's Software Engineering Institute does it. I don't know if anyone's heard of it, but we ran into a project — I mean, it wasn't nearly as big as Sims — where it was kind of crazy and out of control. And after that, we wanted to figure out how we could make it better. So we tried this thing called Team Software Process, where all the team members are responsible for coming up with the schedule themselves, they're responsible for tracking their own time, there's time set up for reviews and everything, and there's a lot of cross-team communication — which seemed to help with a lot of these areas where you don't know who's responsible for what, and you don't know if the communication is going across the team or back up, or stuff like that. I don't know. So part of the problem here is that this is kind of a chicken-and-egg situation. There was a lot of churn here just because it was such a high-pressure product, and as a result we had a lot of change — if something wasn't working, it would be changed. We lost people, too. It might do me better to diagram the situation on SimCity, which had almost as many engineers — we had like 22 engineers on SimCity at the end, as opposed to about 28 on Sims 2 — where we just had a very competent... we called them a DD, but at that point it was like the Maxis DD, which is more of an engineering manager.
So we had a tech lead, and then basically all the engineers were being managed by the engineering manager and given a good amount of technical direction by the tech lead — but not too much. I think one of the problems with Sims 2 was actually that there was too much technical direction. Too much process is what I'm trying to say. Just to follow up on what we're doing: it's something similar to what you described, and it's called Scrum. It's more of an agile methodology, where a team is self-organized. They break down their own tasks, they create their own estimates, and then they essentially meet on a daily basis to re-estimate remaining tasks — basically burning down on this monthly iteration that they do — and it does remove a lot of the burden from lead programmers and managers of having to break down these tasks and estimate them for other people. Right. Yeah, so the blob wasn't quite as ill-defined as I'm making out here — you're reminding me. We had teams within that whose function was well understood: some people working on a particular section, say the dirty rect system, others on effects and lighting, others on the simulator routing. I'm just unable to give a very clear answer about the overall structure, because it changed reasonably often. Two kind of silly questions, I guess — feel free to ignore them. One: could you quantify why the product slipped, at one more level of granularity? It was design, it was code, the scene graph screwed us — is there any way to drill down one more level and place more blame for the slip? From the talk, it was pretty much 50% design, 50% engineering. The engineering didn't get as much play because it was so obvious. The executive producers were mostly concerned with the design; they all had input into that. But the fact was also that we couldn't have made the date engineering-wise, by a considerable amount. The second kind of silly question is: what's the feedback loop here, corporate-wise?
The game came out and beat forecast. It did slip, which cost money—but here's a kind of screwed-up project that still sells more units than it was forecast to. So where is the... and this is kind of a meta question: when does process actually end up screwing us? Because in this case it cost EA $20 million more, but the game's still going to make $700 million or something insane like that.

Right. I actually had a bullet point on one of my slides at some point saying: don't take the wrong lesson from this. Because one of the lessons of Sims 2 is that you can screw up and survive. But I don't think we can screw up to this extent and continue to survive—and that's why I'm also saying we have to improve our process.

I was just going to say—one of the things you mentioned, and there was a question over here—about when you're slipping: did they make you think of other graphics cards? Every publisher I've ever dealt with says, you can slip, but you've got to get me my seven pet-peeve features that you weren't going to give me before. And that always ends up costing me almost more than the slip itself.

So we were aware of that problem, yeah. And we were, at least on the engineering side, very careful not to add... basically, every new engineering feature that was going to be added during the slip had to go through a pretty formal process—an actual TDD and all the rest of it. Yeah, because otherwise you'd just get burned out. Yes, yes. There's also that infamous Mythical Man-Month advice: if you're going to slip, slip a long time; don't do incremental slips. The other bad thing about slipping is that you've had people crunching for, say, four or five months of six- or seven-day weeks, and now you're extending that for another six months.

I was just going to comment on something Chris said about when the process thing is actually going to hurt.
Which is just that I think it's pretty clear that a lot of the time the process thing—when things slip long and that sort of stuff—doesn't actually hurt, because there isn't sufficient competition among the games out there to make it hurt you. If a game like The Sims 2 slips for a year or two, it doesn't really matter, because there aren't competitors coming in and selling to that market aggressively in a way that takes away from it. That's usually when companies fail because of process things: because someone comes in and takes advantage of that.

So I would actually slightly disagree with that. I think as these games get bigger and bigger, the cost of slipping increases for a number of reasons. One is just marketing: we lost a lot of money by not hitting that date because we already had TV ads booked, various things like that. The other is burn rate—the burn rate for slipping two years on a big project like that is phenomenal.

I was going to ask a question which might lead to a more general discussion, which is: can you talk about the background your object engineers had? You refer to them as object engineers, which I guess means they're trained in engineering, but at the same time they're exclusively using a visual tool and getting frustrated with it because they'd rather be using command-line, text-based things. So what is their background? Were they engineers or designers? Would you rather have had engineers? Would you rather have had designers? And how would you change the tools, in a general sense, to tailor to that?

So they weren't C++ engineers. Although some of them had that capability, they were gameplay scripters.
The way I tend to think about them—I've had some experience in the animation industry—is as that TD-level glue that holds the whole process together. They're taking the assets and the engine that's being supplied by the engineers and, together with the producers, knitting it all together to make the game out of it. In a game like The Sims, where the gameplay is nicely compartmentalized like that, they're a kind of combination level designer. Yeah. Yeah, basically.

All right, well, thank you, Andrew. [APPLAUSE]