PSA: Your Verizon.net email address will stop working

If you do NOT have a verizon.net address, then don’t gloat. Check your inbox for friends & family who use it, and give them this info.

If you DO, then either (1) Verizon has cruelly and rudely informed you, with short notice, that they will end your verizon.net email service, or (2) Verizon will do so “in the coming weeks.” But you can save your email address.

You must wait to receive the notice (in email and/or when you log in to Verizon webmail), but then you must act quickly, within the deadline you are given. They promise 30 days, but some people got six days’ notice.

Context: Verizon is ending email service, as obnoxiously as possible, because (1) they now own AOL, (2) they don’t want to do any avoidable work, and (3) they are thoughtfully reminding us of the historic inability of the telecommunications sector to deliver a user experience that isn’t horrific. If you don’t act, your verizon.net address will stop working. I wrote this because I am close to a few Verizon victims.

The good news: You can preserve your verizon.net email address.  (And you want to do so.) Even if you’re not using it actively, if you ever used that address there are surely people you care about who never entered your newer email into their address books.  It’s worth the few minutes to preserve the address as yours, forever, including after you drop Verizon entirely.

To get it done:  Continue reading “PSA: Your Verizon.net email address will stop working”

Which reality: Are VR & AR over-hyped? Or inevitable and transformational?

[This analytical essay was published on Digital Media Wire.
Below is the pre-publication draft.]

Even if you’re not certain what’s meant by the buzzwords “Virtual Reality” and “Augmented Reality”, you have surely heard their growing buzz.  This year’s Game Developers Conference includes a two-day VR Developers Conference, and GDC’s Expo will feature at least 4 VR headsets and 70 VR games. While the game industry is consistently an early adopter of new interactive technologies, VR is already a multi-media phenomenon: a “virtual reality experience within Amazon Video” is in development, film festivals are featuring VR movies, and there’s a VR broadcast of the Coachella Music Festival. VR & AR are enjoying rapt attention from both industry press and general news media.

The new display technologies are also getting financial attention. Facebook’s startling 2014 acquisition of VR developer Oculus was big news, based on the huge price: $2 billion. Another 120 VR deals in 2015 drew a further $632 million from dozens of firms and funds, and inspired the creation of VR/AR-specific funds and incubators.

Will this excitement inevitably drive the proliferation of VR & AR experiences? Or will it bring VR & AR to an early Peak of Inflated Expectations, followed by a descent into a deep Trough of Disillusionment? (To borrow the fantasy-fictional jargon of the autological Gartner Hype Cycle.) Will VR & AR become as ubiquitous as touchscreen displays? Or are the ballooning expectations dangerously over-inflated? To all these questions, the answer is “Yes, but relax about it.”

Despite all the talk about VR & AR, the words themselves are vaguely defined. For many years, VR consistently referred to “an exciting rendering technology, where I cannot afford the peripheral.” In 1980, this included the simplest possible real-time 3D rendering technology, but the rapid proliferation of personal computers over the next few years brought real-time 3D to the desktops of the masses, and real-time 3D on a flat screen was no longer considered “VR”.  In the early ‘90s, haptic feedback was VR technology, but Microsoft’s 1997 introductions of affordable force-feedback joysticks and steering wheels were welcomed as game accessories, not “VR devices”.

In other words, until recently VR was an aspirational buzzword; it referred to technologies that were not yet ready for widespread consumer distribution. Another aspirational buzzword is Artificial Intelligence, which essentially refers to decision-making or semantic-modeling technologies that are not yet fully feasible. Out of AI have fledged such important technologies as predictive analytics, voice interfaces, and robotic vacuum cleaners, all of which are no longer thought of as AI technologies. Similarly, Big Data denotes datasets whose analysis is not fully feasible, and Home Automation covers exactly those systems that are unwelcome in my house: a generic 7-day programmable thermostat is not an example of Home Automation, but the Nest Learning Thermostat certainly is. The Nest autonomously downloaded defective software this January, abruptly shutting off heat for many homeowners, amidst record-setting cold.

For some of us, VR continues to denote rich, multimodal, real-time simulations of reality. But today the phrase predominantly refers to any one of a number of head-mounted displays that include stereoscopic video output, while reading sensors that indicate position, movement, and perhaps location. In other words, today VR usually means “a bucket over your head, with a video projector inside, and a lot of sensors.”
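
For the technically curious, here’s a minimal sketch of why the sensors and the stereoscopic output belong in the same bucket: each frame, read the head pose, then position a camera for each eye. Everything below is hypothetical, plain Kotlin with no real headset SDK; actual devices add lens correction, motion prediction, and much more.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Hypothetical types, for illustration only; no real VR SDK is used.
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
}

// In a real device, this pose comes from the IMU and positional trackers.
interface HeadTracker {
    fun headPosition(): Vec3     // where the head is
    fun headYawRadians(): Float  // which way it is facing
}

const val IPD_METERS = 0.063f    // a typical distance between human pupils

// One frame of the core loop: derive a camera position for each eye,
// offset half the inter-pupillary distance along the head's right axis.
// The scene is then rendered twice, once from each position.
fun eyePositions(tracker: HeadTracker): Pair<Vec3, Vec3> {
    val head = tracker.headPosition()
    val yaw = tracker.headYawRadians()
    val half = IPD_METERS / 2f
    // Unit vector pointing out the right ear, rotated by the current yaw.
    val right = Vec3(cos(yaw), 0f, -sin(yaw))
    val offset = Vec3(right.x * half, 0f, right.z * half)
    return Pair(head + Vec3(-offset.x, 0f, -offset.z), head + offset)
}
```

Those two slightly-different viewpoints are the entire stereoscopic trick; everything else in the headset exists to feed that loop fresh sensor readings, fast.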

Augmented Reality, in the usual modern sense, refers to that same VR bucket, except AR’s bucket has transparency. This allows the user to interact with the real world, overlaid with fully-reactive computer imagery.  That distinction accounts for the many game designers and computer scientists who are unimpressed with VR, and deeply enthusiastic about the potential for AR. As one computer scientist notes, AR “has the entire world and much of human experience as raw material to be augmented,” into which it can introduce virtual objects or relevant information, by stark contrast to the fully-immersive VR experience. While enjoying a VR (in the newer, headset-wearing sense of VR) experience, it’s unwise to get out of your seat, let alone walk around and interact with your environment; you are fully blind to the world. In an AR experience, you might see your hands, as they create magical items, or mundane craftworks. And as AR systems learn to map and model the environment around them, your virtual creations could be placed on your actual living-room mantel, for the viewing pleasure of anyone who shares that AR-enhanced view.

Given the overlap in VR & AR technologies, particularly in terms of sensor-enhanced head-mounted display technologies, these distinct concepts are often lumped together, as in this article. Or one might be used to encompass the other, as in GDC’s Virtual Reality Developers Conference, which targets creators of “immersive VR (and AR) experiences”.  Google’s Noah Falstein made a brave and reasoned effort to posit “Transmogrified Reality”, to include AR & VR, while highlighting the power of their effects.

As with VR, the meaning of AR has evolved. It historically has referred to any computerized output that overlays the real world. For example, the “3D Compass” app has long been a simple, if useful, example of AR. The app superimposes a compass display atop the real world (as displayed on a smartphone screen, via the phone’s camera), while showing an oriented map in half the screen. As with VR, that older sense of AR remains at large, and includes augmentations in the form of sound or text, but the usual usage refers to head-mounted video or graphical augmentation.
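
To make that concrete, here’s a minimal sketch of the sensor side of such a compass overlay, using Android’s real sensor API (SensorManager and the rotation-vector sensor) in Kotlin. I’m not claiming this is how 3D Compass is actually implemented; the drawing of the dial over the camera preview is omitted, and the callback wiring is hypothetical.

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Reports a compass heading, in degrees from magnetic north, each time the
// device's fused orientation sensor updates. A (hypothetical) overlay view
// would receive the heading and redraw the dial over the camera feed.
class CompassListener(private val onHeading: (Float) -> Unit) : SensorEventListener {

    private val rotationMatrix = FloatArray(9)
    private val orientation = FloatArray(3)

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_ROTATION_VECTOR) return
        // Turn the rotation-vector reading into azimuth/pitch/roll.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
        SensorManager.getOrientation(rotationMatrix, orientation)
        // orientation[0] is the azimuth in radians; normalize to 0..360 degrees.
        val heading = (Math.toDegrees(orientation[0].toDouble()) + 360.0) % 360.0
        onHeading(heading.toFloat())
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) { /* unused */ }
}
```

Registering the listener is one more call to SensorManager.registerListener(); the point is that this style of “augmentation” is just a drawing layer driven by a stream of orientation readings.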

Finally, AR as a buzzword has shared an aspirational quality with VR; a fighter-pilot’s heads-up display, with essential data projected onto the windscreen, was recognized widely as “AR”. But as the same HUD appears in consumer automobiles, the driver accepts it as simply how she gets to see her speed, or route, without having to take her eyes off the road. If that HUD is referred to as “AR”, the speaker is probably a marketing professional.

Although AR might have the widest range of application, bringing data-display into interactions with an airplane’s wiring harness, or with a surgical patient’s peripheral arteries, VR, too, has applicability beyond games or video entertainment. Game designer and author Raph Koster wrote an early analysis of Facebook’s $2 billion Oculus acquisition that underscored the importance, to social interaction, of presence: that quality of rich interactive connection that will ensure eternal demand for physical college campuses in the face of rapidly-improving online courseware, and for physical conferences in the face of online-collaboration technologies. Facebook’s core business remains one of human connection. The potential for VR to enrich that connection logically motivates the Oculus deal, even if the nature of VR-based social-networking interaction remains unclear.

Indeed many details of the future of VR & AR experiences are unclear, even if the potential is compelling. Competing VR displays now span a range from Google Cardboard, which was distributed free to New York Times subscribers, to Microsoft’s $3000 HoloLens Development Edition. Somewhere in that spectrum is a threshold of “good enough” for broad consumer demand, for any given application of the technology. Elsewhere in that spectrum is the corresponding threshold of “cheap enough” for that demand to be satisfied. As these thresholds converge, the promises of VR & AR could be realized.

There’s just one problem: when it comes to entertainment media, technology can be the easy part. It can take years for content creators to find the right application of a new technology, and to design the content that takes advantage of it. During that time, a medium can be “huge, just two or three years from now!” for over ten years.

That first compelling application that paves the way for an entire platform is the original sense of “killer app”: the original spreadsheet program, VisiCalc, released in 1979, drove the success of the Apple II computer and motivated IBM’s release of the PC. Similarly, a single game title can reveal the potential for an entire genre of experiences. (This applies to the extent that I define game genre as “a hit game, and its imitators”.) Each VR-device manufacturer is seeking its own killer app, which will probably be an entertainment experience. AR’s wider applicability might lead to its success emerging from a wider range of genre-defining experiences in various industries or content categories.

While new interactive technologies can make many new experiences possible, not all of them are appropriate. Touchscreens, once a rare and exciting technology, are becoming commonplace, appearing everywhere. Unfortunately, this includes touchscreens serving as the main systems-interface in automobiles, replacing the knobs and dials that had allowed drivers to keep their eyes on the road.

When real-time 3D animation was new, there was similarly ill-conceived over-application of that technology. In the mid 1990’s, retailers were excited about bringing the sales process online, which history has shown to be a wise impulse. But many of them sought to do so with a VR experience, which at that time meant a 3D-animated simulation of a real-world in-store shopping experience. The results were very high-tech, and visually exciting (for their day), but also an efficient means to bring all the inconvenience and frustration of real-world shopping to the otherwise-efficient online store.

Another pathology of new media is “shovelware”, the careless, hurried redeployment of the previous medium’s content onto the new one. When CDs were new, “multimedia” content became the rage: encyclopedias, textbooks, courseware, and games were all compelled to appear on optical media, with music, animations, video, and whatever else would exploit the new technology. This did not last as a medium in its own right. But the integration of sound and images with a broad range of content did become commonplace, even while “multimedia” became a term of derision. (“I survived the multimedia scare of 1993.”) The technology succeeded, even as it disappeared as a product category.

The multimedia era showed that the success of a technology need not correlate with the success of the innovators in that technology. Multimedia was not kind to its parents. Similarly, even as the personal computer industry grew dramatically, bringing computers into every household, and later onto every desk, the PC manufacturers suffered.

The fact that Internet Service Providers prospered in step with the growth in Internet access reflects their monopoly position, granted by municipalities in the 1980’s when Community Antenna Television (aka CATV) was seen as an important public good, to enable access to broadcast (over-the-air) television signals. The thousands of cable companies that became today’s Comcast (and its very few competitors) were each local monopolists. That was an unusual trick of history; today’s innovators would do better to heed the warnings from multimedia, from personal-computer manufacturing, and from the various console-game platforms that failed to build a roster of compelling proprietary content.

VR & AR offer an inherent value that has led investors, manufacturers, and content developers to a shared confidence in their future. This distinguishes VR & AR from 3D television. 3DTV was driven by television manufacturers, who were desperate to find arguments for consumers to replace their perfectly good large flat-screen TVs. The content industry experimented in the medium, and turned away. At best, a 3DTV production could hope to resemble a 3D movie: an incremental enhancement to an already well-defined experience that remains fundamentally unchanged. And an enhancement delivered through burdensome production costs, with mixed results.

With time, the creative balances are found, and the truly valuable technologies become prevalent, even ubiquitous, exactly while they become unremarkable. A cynic might snark at the way an “aspirational buzzword” such as AI might apply only to those technologies that are not clearly feasible, but the value of a field such as AI is proven by the wide range of its alumni. The success of VR and AR similarly will be proven by the casual acceptance, to the point of disregard, with which consumers will greet the most engrossing entertainment platform, or the most enriching workplace knowledge base.


Dan Scherlis is an executive producer of health games, including the NIH-funded BreatheFree smoking-cessation intervention. Dan was founding Content Director of Comverse Mobile Games. At Turbine, he was CEO, and Producer of the Asheron’s Call MMO.


Game Developers Conference 2016 launches today, with inaugural VR Developers Conference

[Below is a pre-publication draft of an item that will appear later today on Digital Media Wire. The below will then be replaced by an excerpt of the final version, and a link. This piece is basically a frame for my longer analysis & opinion on VR & AR.]

The 30th Game Developers Conference today begins its week-long occupation of San Francisco’s Moscone Center. In addition to the usual collection of one- and two-day “summits” that precede the core Wednesday-Friday conference, this year’s GDC includes a new two-day program. “The Virtual Reality Developers Conference (VRDC) is a new event for creators of amazing, immersive VR (and AR) experiences.”

GDC’s promotion of the VRDC, and the event’s “new conference” status, reflect a fascination with VR & AR that is widespread, but perhaps deepest in the games industry.

The new VRDC includes two tracks: a “Game VR/AR Track” for game developers, and an “Entertainment VR/AR Track” for “multiple industries including filmmaking, travel, retail, fitness, product design, journalism, and sports.”

For the GDC to devote a track to non-game content would be consistent with a transitional status for VRDC, co-located with GDC until it proves itself capable of independent flight.

And VRDC is off to a strong start: VRDC-specific tickets are sold out.


Dan Scherlis is an executive producer of health games, including the NIH-funded BreatheFree smoking-cessation intervention. Dan was founding Content Director of Comverse Mobile Games. At Turbine, he was CEO, and Producer of the Asheron’s Call MMO.


I’m a Health Games Guy, These Days

I’m writing this as I arrive at the Game Developers Conference. For me, this is an annual reunion with some people I admire, respect, and enjoy. (I also hope to go to some sessions.) As happens with our annual milestones, I instinctively compare myself to my last-year iteration. I’ve a different business card and self-identity. And I’m part of three projects and teams that I enjoy:

I’m starting with a personal note, but I’ve some thoughts on a new medium:  During the last year, I’ve happily transitioned from “game executive who’s looking into different areas” into an enthusiastic “health games executive producer”.  I had been advising a couple projects, and as they gained momentum, I gained insight into the peculiar needs and opportunities of this space.  It reminds me of the first years of what we later called massively-multiplayer games: it’s the frontier. My fellow expatriates from traditional games and I don’t yet agree on the best creative approaches or business models, but we share a confidence that this stuff will work. I mean: These can work out nicely for the companies deploying these games, and can work for the people playing these games.  (Our players, or should I say “patients”? Or maybe “customers”? During our testing they are “subjects”. But I suggest we avoid the game-industry’s “users”, shall we?)

And, as with MMOs, we’re grappling with a new context that makes new demands. The only reason for health games to exist, indeed the only motivation that justifies developing any “serious game”, is the opportunity to provide superior results from a clinical, behavioral, or educational perspective. I don’t remember the word “efficacy” being uttered ever, let alone regularly, in traditional-game product-planning meetings. I call myself an executive producer, which means I am likely to identify and contract the development team, to ensure a convergence between an engaging game design and an efficacious intervention strategy, and to manage and support the funder/developer relationship. As E.P., I am certainly focused on delivering a successful product, and on forming the partnerships or relationships necessary to success. For my current projects, “success” means revenues and commercial leadership.

Health games have not included very many commercial successes, with important exceptions in a couple sectors. Specifically: fitness, and mind-training or “brain games”. I think there are reasons for the limited successes: Few health games have started from a clear understanding of why a *game* should be the best delivery mechanism. Few well-motivated projects include experienced, proven game designers, without whom any game is unlikely to be fun. And few of these are conceived and initiated with a clear understanding of how they will go to market, of who will pay for them, and of why the payors should be expected to do so.

The odds appear to be long, which is only a problem if you are making a fair bet on a level playing field.  I don’t play roulette. I will happily enter any contest with a rich, long-shot-style payout, but only if I’m playing with a team of ringers.

My column for DMW: Don’t clone my indie game, bro


Soon after arriving at this year’s Game Developers Conference (GDC), I was struck by the complaints — both in conversations and in rant-style conference sessions — about a rampant and increasing practice of large game companies ripping off the work of smaller, independent developers.

When I spotted a clever little badge ribbon, one that clearly was not authorized by conference management, I wrote this column for Digital Media Wire.

Panel at Boston Post Mortem: Analytics & Metrics

I’ve assembled a panel for tomorrow night’s regular monthly meeting of Boston Post Mortem, aka the Boston Chapter of the IGDA (International Game Developers Association).  I’ve a business trip, so I’ll miss the session.  That’s a shame, because the panelists bring a wide range of perspectives on the use of analytics and metrics for game development:

I do enjoy putting together a panel, and I enjoy moderating as well.  But, aside from my being out of town, Darius is flat-out better-qualified for this one.  Plus, I’ve been working for Sonamine, and thus didn’t really belong up there as his moderator.

Panel at Harvard: Evolutionary Biology Looks at Videogames (Who Plays Games and Why)

[Update: Added more links based on our discussion. More will follow this weekend.]

For a few years now, I’ve wanted to get a game designer (or two) into a serious discussion with an evolutionary behavioral biologist (or two).  Obviously we find games — specifically videogames — fun, compelling, and sometimes badly addictive. But just what is it about those activities that is so rewarding?

I’ve finally rounded up the venue, the right scientists (Harvard’s Richard Wrangham and his colleague Joyce Benenson of Emmanuel College), and a couple esteemed colleagues (Kent and Noah). We’re on!

The event is Wednesday night.  It’s at Harvard, and walk-ins are welcome.  Below are the details for the event, from the Harvard page, and links to some supplementary materials.  I fully expect to add more links, based on our discussion.

I can’t resist noting: as I type this, there are no Google hits for “evolutionary ludology.”  Here’s the vitals for the event:

Who Plays Games and Why: Evolutionary Biology Looks at Videogames

A discussion with Harvard Human Evolutionary Biology Professor Richard Wrangham, Emmanuel College Psychology Professor Joyce Benenson, and game developers Noah Falstein and Kent Quirk.

Wednesday, June 2, 2010.   5:30 -7:30 p.m. (registration begins at 5:00 p.m.)

Location: Harvard Science Center, One Oxford Street, Cambridge

Electronic games are competing with television for that essential resource: consumer attention.  But exactly who is playing these games? And what is their appeal? Indeed, why do people find games “fun” at all, from simple board games to immersive 3D fantasy worlds? Is there a biological reason that males and females play dramatically different kinds of games?

The many genres and formats of games will be surveyed in a brief multimedia overview, with a look at the different populations that play these different games. Then, human-behavioral scientists will collaborate with game-design professionals to explore the biological roots of our attraction to these experiences.

Please join this discussion, with:

Alumni and friends of the Harvard community: $10.    Undergraduate Students: complimentary

Supplementary materials for this session:

Articles and other online resources, general background:

Items mentioned during the discussion: [more to follow]

Books mentioned during the session: [more to follow when I can review the session’s recording]

  • Bowling Alone, by Harvard’s Robert Putnam, shows the decline in America’s “Social Capital” — by many measures — over recent decades. (I think this decline motivates our hunger for social engagement via online games, social media, etc.)
  • What Video Games Have to Teach Us About Learning and Literacy (2007) by James Paul Gee.  His short opinion piece in Wired speaks to educators and to game designers.
  • Rainbow’s End, a novel by Vernor Vinge. (Recommended by Noah and Kent as a vision of augmented reality.)
  • Snow Crash, a novel by Neal Stephenson. (Mandatory reading for social-media industry participants. An early vision of virtual reality, with insight into our relationships with our avatars.)