I’ve been hearing really good things about how the event went, and really wish I’d been able to make it this year.
I’ve always enjoyed Jerry’s writing over at Penny Arcade, so I suppose it should come as no surprise that I think he damn near nailed the game industry metaphor when he said this:
The stakes are high, and getting higher, and publishers who were once merely gun-shy are now officially paranoid, rolling around in a padded cell until the drugs take effect. Part of the reason GDC made me uncomfortable is that I could feel its culture pressing on me from all sides, and I knew it wasn’t mine. But the other part was that I got a sense of how brutal that life is, how unstable it can be, how maddening, and I just wanted to come home and match gems or some shit. I didn’t want to see it anymore. I don’t want to think about a cow’s quiet eyes every time I grip a hamburger.
There has been a lot of discussion within academic circles regarding the use of virtual worlds to research various forms of human and communal interaction and formation. Given the current exorbitant cost of entry in creating an MMORPG (and the fact that established games already have a population available for sampling), a great deal of the research seems to be occurring within already established games. There’s nothing wrong with this, per se, though there will come a time when the greater control over variables that comes with creating your own environment will likely become necessary. (As a case in point, while researchers can select which games to sample, they tend not to have control over how a game is marketed, nor which demographics it targets.)
There also seems to be a fair bit of focus on contemporary games like Second Life and World of Warcraft. This certainly has merit, especially since these games reach a certain critical mass that allows for a broader demographic sample: you are likely to get not just core gamers, but also casual players with other interests who play as a fad (because “everyone” plays). This can only help the overall direction of research into social dynamics and interaction, and the examination of the social organism as a whole. What I’ve found very little of, however, is attention to prior research, prior virtual worlds, and prior experiments.
I think this is incredibly unfortunate. There is still a lot of play left in earlier models, such as MUDs (Multi-User Domains/Dungeons, the text-based precursor to the modern MMORPG). Many MUDs have now been established for well over a decade, which offers a wealth of opportunities for observing how a community matures and shifts as it ages. Let’s take AvatarMUD as an example, since I have nearly a decade of experience with it. Over that time, I’ve watched the population rise to a peak of 190 simultaneous players, with a daily median of roughly 120, then slowly decline as players moved on; the daily median is now closer to 60, with a peak of around 90. Even so, it has survived better than many MUDs.
As the player community has shrunk, so has the sense of community, which can be partially attributed to several design implementations that allowed for greater fragmentation of the player base (in addition to outside factors, such as a shift away from MUDs in general and the increased availability of broadband, which made more visually robust games playable). What is particularly notable is that as the game evolved, we adjusted and adapted more and more for “min-max” and hardcore players. This came at the cost of the more casual, social player. While I don’t claim it is a perfect ratio, I strongly suspect there is at least a passing correlation between the reduction in population and the prior percentage of casual and social players. Those who remain are largely committed players who have invested hundreds or even thousands of hours into their characters, and generally have considerably more than one alt. They’ve “mastered” the play mechanics, and generally continue to play because of their investment in the game and the friends they’ve made within it, rather than because they continue to find new challenges.
Because of these adjustments made to “keep ahead” of the “hardcore” players, the barrier to entry for new and more socially oriented players becomes untenable unless they already have friends within the game. This is not unreasonable, since MUDs are largely populated through word of mouth: they are often labors of love, frequently not even allowed to charge or generate revenue, which means they tend not to have the budget to advertise. It does, however, mean that the truly new player is largely left to fend for themselves, and can become extremely frustrated until they establish a rapport and support group among other players. If they aren’t willing or able to devote the time and energy toward that end, that often marks the end of their time on the MUD.
This isn’t meant to be a doom-and-gloom forecast for AvatarMUD; the staff remains receptive to a number of ideas on how to help the casual player become established without sacrificing the game mechanics and design path they’re interested in pursuing. It remains to be seen how effective these ideas will prove, but that returns me to the point of this essay: MUDs present an opportunity to observe communities further along in their life cycle, and their continued usefulness as a sandbox for virtual-world research should not be underestimated.
This week on Gamasutra, Stephen Ford’s article has garnered a fair bit of coverage. Is there something wrong with how business is being conducted in the gaming industry? Perhaps. That said, there are a great many companies doing fantastic business and reliably turning out quality products while increasing profits year-on-year. Regardless of any talk of problems needing correcting, Ford puts forward the idea of small production companies dominating the gaming industry in the future. What he’s suggesting, in essence, is that the games industry should adopt the production model that the film industry currently uses, especially as the average AAA game budget increasingly resembles the average blockbuster film budget.
Let’s take a look at the parallels between his model of game development and the film production model he’s basing it on: say we’re a film production company and we want to make a big comic book movie from an IP that we’ve optioned. So now, simultaneously, we’re making the rounds of the different studios to find one that’s willing to give us the money, we’re hiring a screenwriter to write the script, and we’re bringing on board a director and possibly casting the lead roles, negotiating all of the contracts with all of those people as we go. That’s the early stage, the make-or-break stuff. Boom: we’ve got one studio that’s willing to give us the money, and we’ve found a director and a big name to star. Let’s say the director also has a character artist he wants to help shape the look and feel of the film, plus an editor and cinematographer he likes to work with, so in this case we don’t need to find those people. Now we can really get into pre-production: scouting locations as we bring our casting director on board (we probably use the same one or two over and over again), hiring the special effects company we want to use, arranging the rental of all our equipment (possibly from the studio that has given us the money, possibly from one of the many other places in town that do such things, perhaps from several such companies), hiring all of our production assistants, assistant directors, set dressers, grips, lighting board ops, sound guys, makeup artists, art assistants and all the rest, and arranging transportation, housing, and food for everyone. THEN we can start production.
That’s a lot of fuss, a whole lot of negotiations, and a whole lot of places things can go wrong. Film production is a massive undertaking, even for a modest budget. It’s super expensive, because all those people are contractors. It requires knowing just about everybody in town, and the town’s pretty much gotta be LA or New York. So why is that a good idea?
It all comes down to due diligence. For every step of the way, you want to have the very best people you can possibly hire in each role, and you need to be able to fire them instantaneously if they drop the ball in any way. With a good reputation, you’re golden and you can get work into old age if you can keep up, and make LOTS of money in the process. That’s why LA is so notoriously networked. It’s all about who you know and what you’ve worked on, because as your reputation as a producer improves, so does the quality of people who want to work with you, and the more likely it’ll be for the studios to give you big piles of money. The film industry is one with vastly more people looking for work than actual work, and the production house system lets the cream rise to the top, in theory at least. While it seems like it would be cheaper to have everything under one house, it’s important to note that the film industry adopted the production house model to reduce overhead as well as risk while improving the quality of the films produced. Despite my significant dissatisfaction with most of the pablum produced by the film industry, after looking at the stuff made in the 60s and 70s, I’d have to say it’s been largely a good choice. The competitive pressures of the production house model help to ensure that the best managers are in control of most of the money in the industry.
Is the film industry the same as the game industry? No. The process of building a game is a different thing, with its own unique goals and challenges. But can the game industry use the business model the film industry utilizes? Absolutely. The increasing use of outsourcing makes it ever more possible. As specialized companies rise up to provide the very best quality available in their specialty, at a price comparable to doing it yourself in-house and without the overhead, we will naturally see the small production company rise in popularity within the game industry. I don’t think the studio model will be supplanted, but I do believe that once a major hit of the Half-Life or World of Warcraft variety is produced via this method, we will rapidly see a vast shift, specifically with regard to the expansion of the industry. Starting a development house these days is a daunting task. Smaller companies committed to doing one thing perfectly just make good business sense, and the small production team is the natural outgrowth of that market trend. It’s sure not going to happen overnight, but we will see it happen.
Recently, Chris Crawford has been making waves by claiming that games are dead at the hands of an industry that has forgotten how to innovate. I certainly wouldn’t make so bold a claim as that: Alyx in Half-Life 2: Episode 1 demonstrates remarkable advancement in characterization, artificial intelligence, and narratological methodology. Crawford’s own Storytronics project, from my understanding of it, represents particularly potent potential for innovation in storytelling within the medium of video games. On the level of pure design, Will Wright’s Spore represents huge leaps in applied computer science, just as The Sims expanded the very boundaries of gaming. Meanwhile, Guitar Hero is proving that a unique controller can make all the difference in the world, Katamari is making perfectly clear that voice acting and realistic graphics are not universally important, and even within the realm of realism, Crysis contains the closest thing to a living jungle ever seen in a game.
Clearly, innovation still abounds. Though we aren’t seeing completely unique concepts of play exploding into one new genre after another, I think it would be foolhardy to conclude that we are therefore at the end of the road for gameplay concepts. Remember that film as a medium existed for nearly half a century before Citizen Kane, and it was realistically 50 years before most of the technology still used today was developed. Comic books were popular for 50 years before Watchmen. Painting has been around for 10,000 years or more, and only in the last hundred has there been a Picasso.
Much like in every other medium, there are really no hard and fast rules for making a good game. “Add nice graphics” or “Make sure the gameplay is fun” is hardly a schematic for making a good game, and could be roughly equated with backseat driving: telling someone to remember their turn signal when the turn signal is already on. That said, a great many individuals have certainly attempted to give a basic grounding in which design principles do and don’t work in game design. There are a fair number of similarities between these authors (unsurprising, since they all read each other and come from similar backgrounds in the industry), but what I personally find more interesting are the differences between them, and which metaphors different designers have found most effective.
One of the earliest books I read this semester was A Theory of Fun by Raph Koster. While it is less a book about game design per se than a discussion of the fundamental concepts of fun and play, Koster does explore the method he finds most effective for game design. His metaphor is based on his theory that games are fun because the brain is constantly seeking patterns to process. With that in mind, he tries to find new patterns for the brain to process by thinking of a verb that encapsulates an action or series of actions, and then designing the game mechanic around that verb (or, if the game is expansive enough, verbs). From a ludological perspective, this is a very appealing method of design, since the game mechanic quite literally designs itself. It does not leave much room for a narrative-centric game, however.
Video games currently face a slew of legislation attempting to ban or criminalize the representation or discussion of certain topics within video games, effectively censoring what can be made in games, or even what can be defined as a game. This is hardly the first time this sort of action has come up, however; other contemporary forms of media have faced similar battles. In the early 20th century, photography had split into factions over the nature of photography as an art form. The division was between a style known as pictorialism, which allowed and encouraged image manipulation and pre-composition, and straight photography, which disallowed any pre- or post-processing manipulation of the image; about the only manipulation permitted in straight photography was some dodging and burning applied during the printing process. Each faction had an advocate in the public fora, notably William Mortensen on the side of pictorialism and Ansel Adams on the side of straight photography. The debates between the two often became heated, with Adams becoming the winner by default after Mortensen passed away. There was also some dirty pool played on the part of the straight photographers, who deliberately reduced pictorialism to little more than a cursory mention as a photographic movement in Beaumont Newhall’s The History of Photography from 1839 to the Present.
My personal contention is that this turn of events significantly marred the public view of photography as an art form, encouraging the mindset that photography is simply an objective record of what is or was (which is not the case even within straight photography). It has taken decades and a fundamental paradigm shift in photography (i.e., digital manipulation: Photoshop, Painter, et cetera) to even make a dent in this perception, with considerable inroads still to be made. This denial of the more expressive, authorial form of the medium encourages the public to view it as a sort of stepchild to more accepted forms of art.
There has been a considerable uproar about Nintendo’s choice of name for their new system in the days following its announcement. I’m not going to get too much into the reasoning or opinions about the name, since those topics have already been addressed ad nauseam by most of the web. Instead, let’s look at some of the facts surrounding ‘Wii’. First of all, love it or hate it, everyone is talking about this new system, which is a marketing coup that is hard to ignore or downplay. This buzz is also mere days before the Electronic Entertainment Expo (E3), where they have scheduled a major press conference to announce further details about the console, meaning additional time in the media spotlight.
What’s particularly interesting, however, is that they also used this buzz to gloss over their announced release date, apparently not until Q4 of 2006, which was covered by only one major gaming news outlet. This will presumably be confirmed and properly announced at the E3 press conference, but it’s still interesting. It is also worth noting that even amid all this attention, Nintendo has remained tightlipped about the technical specifications of the system. They are, in essence, generating an unprecedented media buzz over a system that no one knows much about: we know that it uses an innovative new controller, and that they’ve opted not to pursue High Definition this console generation. That’s about it. There’s been no gameplay footage to speak of, though several high-profile companies have signed on to develop for the Wii, and Nintendo even had a prototype mockup in a locked display case at their booth at the Game Developers Conference this past March. There has been no small amount of speculation about the machine’s specifications, but nothing official.
I must say, I’m rather impressed by this little gambit. Satoru Iwata gave a keynote at the Game Developers Conference about disrupting the industry, and from the looks of things, that’s exactly what they’re aiming to do. My vote is more power to them: we need to shake things up a bit, and show that there is more breadth and depth to games, and to what games can be, than is commonly accepted today. There’s more to that than simply deconstructing what came before, just as there must be more than a new marketing campaign. To borrow a trendy slogan, it is not enough to Think Different; we must also Do Different. Nintendo is certainly showing signs of putting deeds to their words, and I only hope that proves true.
This was the last day of the conference, and you could definitely feel that people were getting worn out. I didn’t manage to make it through the expo before it closed, which is unfortunate but not the end of the world; frankly, the panels I went to were more important. I made it to all three panels I’d planned to attend, though I got into the first of the day about 20 minutes late after the shuttle hit some traffic. All three were about methods of starting a new game company, essentially the different routes people have taken to do it.
The first session was about bootstrapping a company, one that mostly operates on a work-for-hire/contracting model to raise cash for its internal projects. It was given by one of the guys at Demiurge, which is based in the Boston area, and the model has worked quite effectively for them. We swapped cards, and I’m hoping to make it down for one of their game nights in the not-too-distant future, for the socializing if nothing else (I definitely took to heart the advice from my first panel this week about surrounding yourself with a brain trust of people smarter than you).
The second session was about taking a game from design to product as an independent developer. The speaker had started his own company, and put together a game for about $25,000, “and could have done it for $10,000 if I knew then what I know now.” This was definitely encouraging, and while a lot of his advice was common sense to me, it was reassuring to hear that it’s still possible to do what he did.
The third session took a different tack to starting a company: the venture capital route. It was given by the CEO of PlayFirst, which had just completed its second round of fundraising ($5 million in the first round, and another $5 million in the second). It was interesting to see the difference in presentation among the three sessions, with this one being significantly more businesslike and number-crunching in nature. It is both more intimidating and more reassuring to know that the money is out there, though. I don’t think venture capital is the route I personally want to take, but I’m not averse to it, and I managed to swap cards with a VC in the audience who focuses on startups in the tech and media sectors, for seed and Series A funding ($30k to $2 million). This could prove immensely beneficial should I choose to pursue that route, especially since one of the things they bring to the table is financial and business tutoring to help get your business running solidly; that’s something you get out of the deal. They usually aim for a 5–15% stake in the company, which is acceptable. I may actually put Kevin in touch with them for UberCon, especially since they’re based out of DC.
By the time the last session ended, the convention center was a ghost town compared to the crowds that had been there all week. It was strangely refreshing, though it did very little to bring closure to the event for me. I took the shuttle back to the hotel, and spent the rest of the evening playing Brain Age… my current brain age is 49 (lower is better, range is from 20 to 70)… lot of work to do on that. I completed about 12 sudoku puzzles, though.