zyntroPICS and Foamation launch new NFCheese Product Line

Our imprint, baubleApp.com, has been working in close partnership with cheesehead.com to create a converged product combining:

* Fan merchandise
* NFC tags
* Player engagement
* Fan engagement

[Image: NFCheese home page]

[Image: NFCheese sales page, keychain]

Unifying fan merchandise with sports marketing is a fascinating way of merging “real world stuff” with mobile.  We feel this is the beginning of a long relationship with our partners at Foamation.  The possibilities are endless.
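The general pattern behind NFC-tagged merchandise is straightforward: each item carries a tag that points a phone at the right engagement content. As a rough sketch only (it uses the Web NFC API available in Chrome on Android purely as an illustration; this is an assumption, not a description of how NFCheese is built), reading such a tag from a web page might look like this:

```javascript
// Illustration only, not the NFCheese implementation. Most NFC merchandise simply
// encodes a URL that the phone opens on tap; this sketch shows a web page doing the
// read itself via the Web NFC API (Chrome on Android). "#scan-button" is assumed.
document.querySelector("#scan-button").addEventListener("click", async () => {
  if (!("NDEFReader" in window)) {
    console.log("Web NFC unavailable; rely on OS-level tag handling instead.");
    return;
  }
  const reader = new NDEFReader();
  await reader.scan(); // prompts for permission and starts listening for taps
  reader.addEventListener("reading", ({ message }) => {
    for (const record of message.records) {
      if (record.recordType === "url") {
        // e.g. the player-engagement page encoded on a keychain's tag
        window.location.href = new TextDecoder().decode(record.data);
      }
    }
  });
});
```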

Interactive Video…Popcorn, Flash, whatever…

As we focus some of last year’s popcorn.js work into services for clients, and round out and release our own proprietary work, part of the long-term value of Mozilla’s Popcorn suite of tools (Butter, popcorn.js, Popcorn Maker, etc.) is that it is HTML5 based, for both audio and video.

That said, the platform has shown limitations in its “mobile friendliness”; those are known issues, since it is restricted to certain browsers (primarily desktop versions). But it is constantly improving.
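To make the Popcorn approach concrete, here is a minimal sketch of how popcorn.js ties timed events to an HTML5 video element. The element ids and the footnote plugin call are illustrative, not taken from any of our client work:

```javascript
// Minimal sketch, assuming a page with <video id="trailer"> and <div id="info">
// elements that loads popcorn.js plus the footnote plugin. Ids are illustrative.
var p = Popcorn("#trailer");

// Timed overlay: show a caption in the #info div from 5s to 12s of playback.
p.footnote({
  start: 5,
  end: 12,
  text: "Timed overlays are plain DOM elements, so they work wherever HTML5 video does.",
  target: "info"
});

// Core API alternative: run arbitrary code when the timeline reaches 20s.
p.cue(20, function () {
  console.log("20 seconds in; related content could be swapped in here.");
});

p.play();
```

Because the overlays are plain DOM elements driven by the media timeline, the same approach works for audio as well as video, which is exactly the long-term appeal.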

It was a bit surprising last week to see a newly launched interactive video platform from General Mills, built entirely in Flash. Its interface and UX are seamless, and the focus is on the content itself rather than on the buttons or the design. That goes to the heart of “why” an End User would want to play with an interactive video authoring experience: because the media assets rock.


It’s an impressive effort.

It’s over here, under Buzz’s Movie Maker from Honey Nut Cheerios: http://honeydefender.com/MovieMaker/

Also featured on Google’s home page is a newly released HTML5 interactive video authoring tool: https://www.peanutgalleryfilms.com/ (must be accessed via Chrome).


It uses a speech-to-text engine to create text-based interstitials in a silent movie. That’s an extremely clever approach, especially in the HTML5 universe, since it doesn’t require transcoding audio into HTML5 formats; it simply runs Google’s existing speech-to-text engine and generates text output. It was great to see something entirely different in this space, and this qualifies.
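We don’t have visibility into Peanut Gallery’s actual pipeline, so the snippet below is only an illustration of the general browser-side pattern: Chrome’s Web Speech API (webkitSpeechRecognition) turning a spoken line into a silent-movie-style title card.

```javascript
// Illustration of the general pattern, not Google's implementation.
var recognition = new webkitSpeechRecognition(); // Chrome-only, as with Peanut Gallery
recognition.lang = "en-US";
recognition.interimResults = false;

recognition.onresult = function (event) {
  var line = event.results[0][0].transcript;
  // Render the recognized line as a title-card interstitial.
  var card = document.createElement("div");
  card.className = "title-card";
  card.textContent = line;
  document.body.appendChild(card);
};

recognition.start(); // prompts for microphone access
```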

Of note, Google’s effort also relies heavily on a great selection of underlying media assets that are perfectly suited to the interactive format. That complementary fit is what makes it all work.

Now, Buzz’s Movie Maker doesn’t really let someone do much more than sequence media clips and choose an audio file. Interstitial graphics can be dropped in, though it’s hard to tell whether the result is meant to be nonsense or whether there’s a method for creating a cohesive story. But, you know what? It’s kind of fun to play with. While the UI is simple, what makes it work is that (it seems) 80% of the effort was directed at the content and media assets Users can manipulate. Creating text-based interstitials via “voice” is certainly an enhanced feature that Google has integrated, allowing Users more personalization of the experience.

The question posed to us was, “Which is better, Flash or Popcorn?”

Well, there’s no right answer at this point. Both have limitations when it comes to mobile and (some) tablets. Both have other upsides and downsides. Certainly, Flash is easier to manipulate in terms of how traditional Designers think. But for the long term? That’s where we’re putting our development efforts: into Popcorn’s suite of tools.

It’s been a while since we’ve seen a pure Flash-based interactive video tool. In some ways, it was nice to know that somewhere, someone is still approving Flash development to advance interactive video…there are still things to be learned from those development cycles as we all work out how to make interactive video about story-making, not just about clicking buttons.

Interactive Video – Again…

zyntroPICS was recently engaged to produce a significant interactive video project for a major U.S. network.

While most of the company’s focus has been on conversational mobile apps, with a keen eye toward applying that technology to 2nd Screen apps (extending television story and character into interactive engagement), for some unknown reason, about every five years, we get deeply entrenched in interactive video technologies directly.

How long has this been going on?

Since laser discs.

Really.

We’ve been around awhile.

Our first foray into interactive video was to devise creative techniques to expand one of the first live-action video game platforms to incorporate multiple, concurrent on-screen “threats.” The original platform could only present a single “threat” at a time. As the platform was being applied to training simulations, it needed to evolve. This was both technically and creatively challenging. We did it.

Other interactive video projects have included a range of experience with full-motion 360-degree video (where the most fascinating projects were the ones in which we started to include 3D animated characters and sound design), as well as branching storylines from multiple camera angles, all back when delivering even a single video stream was a challenge.

Now?

Well, keep an eye out here for updates as we’re able to make public disclosures and show screenshots and links.

How we tie this type of development work into our own conversational apps is also on the near horizon.

Happy 2013 – The Year of 2nd Screen and IoT

contentAI studios and zyntroPICS Inc. wish everyone a wonderful 2013.

From our side, the focus of our ventures has narrowed to extending storied content experiences to 2nd Screens and the “internet of things” (“IoT”).

Initially, we will be extending our own children’s app properties, which have done exceptionally well on mobile, to IoT products. Call them interactive toys, jewelry or companion products; we believe that children’s engagement, especially, is greatly enhanced when real-world objects are included with the mobile/digital experience. This allows an individual’s imagination to take the object and make it part of their play and their own stories.

We’ll be announcing partnerships with some fascinating technology partners in the coming months. We’ll also be more active on the conference circuit as Exhibitors.

We hope it’s a terrific 2013 for All.

Thinking “2nd Screen” First

We’re spending a fair bit of time over at our contentAI studios venture discussing and exploring technologies to better “extend” story and character from television screens over to so-called 2nd-screen apps (with a preference for mobile web over native; but that’s another story).

While there is a great uptick in interest in developing 2nd Screen apps (and the ability to let the audience directly “chat” with characters is remarkable), ultimately, as storytellers, the fly in the ointment is that these are conceived as afterthoughts; they are not inherent to the series, the characters or the story arcs. Doable? Sure.

But…

Getting back to our roots, we see the need to be developing television content that anticipates concurrent 2nd screen use…First.

Not as a gimmick.  Not as “complementary.”   But, as a tool within the storytelling itself, where the audience only understands the entirety of the first screen through their participation in the second screen.

Risky?

We don’t think so.

What are we working on?

Something special.

contentAI studios – An Interactive Scene Engine

Introducing our contentAI studios platform to an ad agency executive over the weekend, we spent a fair bit of time “defining” the applications that are built as “interactive scenes.”

Interestingly, that phrase is not one we’d used before, but it helped the executive (let’s call him “Bob”) quickly understand the contentAI platform in relation to his other work.

We often talk about “motivated characters” or “virtual characters,” but what we really do is create “scenes” that both the virtual character and the End User play out.

So, are we really a “scene engine?”

Yes, in part.

But, we’re still a “virtual character” engine as well.

There are both simple and complex avenues for applying our platform. We think of “character only” as akin to a chatbot that accesses a database of deep information via Natural Language Processing.

But our interactive scenes are three-dimensional: they have depth (the same deep data and knowledge as “character”), coupled with width (alternative paths) and length (all “scenes” have a beginning, middle and end).
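As a rough illustration of those three axes (the field names below are invented for this post, not the actual contentAI schema), an “interactive scene” might be sketched as data along these lines:

```javascript
// Hypothetical sketch only; not the contentAI studios data model.
var scene = {
  id: "diner-breakfast",

  // Depth: the character's deep data and knowledge (what a chatbot would query via NLP).
  character: {
    name: "Marla",
    knowledgeBase: "marla-backstory-and-menu-facts"
  },

  // Length: every scene has a beginning, a middle and an end.
  beats: ["greeting", "taking-the-order", "goodbye"],

  // Width: alternative paths the End User's replies can steer the scene down.
  branches: {
    "taking-the-order": {
      "asks about the special": "special-subplot",
      "orders coffee only": "short-path-to-goodbye"
    }
  }
};
```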

I suspect we’ll start using “interactive scene engine” in some of our description phrasing more frequently. It seems easier to grasp than “motivated characters.”

The MPAA Says, “Get Connected.”

The MPAA is traditionally behind the curve (let’s say “historically,” on many issues).

So it was interesting to see this statement from their CEO, Chris Dodd:

http://www.homemediamagazine.com/piracy/mpaa-ceo-hollywood-must-get-connected-27074

“Our business has become much more than simply making a great movie and inviting our customers to a theater,” Dodd said. “This new age of the connected consumer is here, and so we must adapt.”

Driving more movie attendance through deeper audience connections with story and character (from their 2nd Screens) is our favorite topic.

Now, the issue for “Hollywood” really comes down to the Unions and their ability to adapt so that Talent (Writers, Directors, Actors) can be incorporated across multiple screens, not just in “marketing” spends but where it’s inherent to the Production and the story itself.

Hollywood studios must embrace younger moviegoers on their turf — through connected devices …— former Sen. Chris Dodd, CEO of the Motion Picture Association of America, told an industry gathering.

OK, sounds good.  Now, let’s start extending story and character to the 2nd Screen in a format that is intuitive and natural.

2nd Screen Apps & A New Mindset

Succinct article from RAI on the need for synchronized 2nd Screen Apps and a “new mindset” from broadcasters…this is a major focus for zyntroPICS and our contentAI platform, where the ability to “extend story” across screens was the raison d’être for building the platform.

Of course, we see that this “new mindset” needs to begin with writers, directors and producers, across long-form, short-form and ad content.

http://www.v-net.tv/broadcasters-need-second-screen-apps-and-a-new-mindset/

Nice wrap-up in the article of the obvious questions:

* What can third-party apps provide that a broadcaster cannot?
* What can programme owners/originators provide that third parties cannot?
* What is the market for synchronised advertising, and who gets the money?

Multi-Screen Story Opportunities and Eyeballs

There is an excellent deck, presented by FLURRY during IGNITION WEST last week, HERE.

The two slides that really stand out (specific to contentAI) relate to two-screen engagement times (when the television and the mobile device are both in use) and the ratio of ad dollars to consumer time (mobile spending will increase exponentially over the coming years to play “catch up”):

 

 

Those two slides tell a remarkable story with regard to opportunities for extending television content, both programming and ad-units, to mobile experiences.

After all, 50% of “Location” is the couch.

What interactive/digital folks don’t seem to “get” is that they continue to consider “television” as “passive/lean-back” engagement.

They don’t understand that it is active, in that it is emotionally engaging.

Extending that emotional engagement via interactive, personalized virtual characters is what our contentAI studios platform was designed to deliver.

Are we finally in the day and age of “convergence” (about 15 years after the concept was first introduced)?

Pretty Trees, but it’s the Forest that Fascinates – CES 2012

Thoughts on INTEL @ CES

27 JANUARY 2012

contentAI studios | Portland, OR | http://contentAI.com

 

True Story: Once upon a time, a major motion picture Studio had one person assigned to travel to its global offices to see whether the films in development or production had any World Wide Web needs, or whether there might be any cross-promotion potential. The Distribution, Production and Development executives all said, “no.”

That happened to be at a time when we were tapping into an online (remember CompuServe!) fan base for a series of novels being developed into a motion picture property (for which we’d already licensed Electronic Game and merchandising Rights). The absolute hub of our activity was our Property’s URL and its Forums. For us, all of the pieces fit together into one large User experience to dip in and out of from various locations. The term “transmedia” hadn’t been invented. We didn’t know what we were doing, other than knowing that the Whole Enchilada was a lot cooler than the individual ingredients.

Fast forward +/- 15 years into the future. Today. OK, technically, a couple of weeks ago at CES in Las Vegas.

The most exciting space for us was the INTEL® booth. OK, “booth” is used loosely; it was the INTEL Command Center at CES.

Featured were INTEL’s “trees,” set up around the Command Center as disconnected workstations. Typically, these were different divisions and technologies, with their team members focused on their own silo of interest, including:

  • Ultrabooks – OK, we love them. We use them for coding and building our apps (we picked up an Asus U21 the first day it shipped).
  • AppUP – Desktop apps for Windows machines, with an amazing team working behind the scenes to make the process rapid and enjoyable (see: Encapsulator). What an amazing platform and reach, whether for Enterprise, for Education, or for home… (more on that in a minute).
  • WiDi – Huh? Intel’s wireless display technology (think HDMI without the cable), bridging the devices on your couch and your big screen.
  • Ultrabooks & Nuance deal – Lost in the press releases was a remarkable partnership announcement to advance speech recognition on Ultrabooks (yes, that Nuance, the one that really does a lot of the heavy lifting for Siri). No mention of this on the floor.
  • Smart TV – Formerly the Digital Home Group; the device(s) to bridge from the big screen to an on-the-couch interface continue to expand. While we saw competitors such as Panasonic (Viera) and others all migrating to the “television app store” experience, the INTEL group, when coupled with other offerings within INTEL, is what creates the groundwork to cohesively extend television to handheld devices.

You see, we at contentAI studios are really “content people.”  We’re storytellers.  We’ve worked on motion pictures, television, internet television and interactive television…oh, and mobile experiences.

Why is INTEL® massively exciting for us?

Because the “future” we thought was 2-5 years away is already here today. If you just connect the dots. If you envision how those silos all interconnect at a content experience level…

Our contentAI studios platform was originally created to produce emotionally engaging, personalized interactive experiences with film and television characters on hand held devices.

It looks like this:

That’s the image that’s been in our PowerPoint® deck for about a year.

The problem is that this is “disconnected multi-screen engagement.” What’s needed are cohesive experiences that bridge the User Experience between screens, where story is extended…where emotional involvement deepens…

The idea that the Audience can engage in one-to-one, personalized “conversations” (text or voice) with a character on television (pause the linear show and start a one-to-one chat), where the consumer discovers new and alternate storylines…where Brands have all-new interactive real estate (in someone’s hand). All possible. Now. Today. #wayCool
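None of that requires exotic plumbing. As a purely hypothetical sketch (the endpoint, message shapes and character name below are invented, not a description of our platform or anyone’s shipping product), the handshake between the big screen and the handheld could be as simple as a shared WebSocket session:

```javascript
// Hypothetical sketch: pause the linear show from the handheld, chat with a
// character service, then resume. The endpoint and message format are invented.
var socket = new WebSocket("wss://example.com/session/abc123");

socket.onopen = function () {
  // Viewer taps "Talk to the character" on the handheld: pause the show, open the chat.
  socket.send(JSON.stringify({ type: "pause", source: "handheld" }));
  socket.send(JSON.stringify({ type: "chat-open", character: "detective-mara" }));
};

socket.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === "chat-reply") {
    console.log(msg.character + ": " + msg.text); // render in the handheld chat UI
  } else if (msg.type === "chat-closed") {
    socket.send(JSON.stringify({ type: "resume", source: "handheld" })); // un-pause the show
  }
};
```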

When we looked around at INTEL’s “trees” at CES, we saw the forest.

We feel that in order to make this truly exciting, the content that is offered needs to be more than games or fancy new, intuitive cable menus. The content needs to connect on an emotional level. After all, “television” was always a storytelling device in our homes. Tapping into that engagement level is what will both sell devices and satisfy the new interactive audience.

And, what about the opportunities for retail solutions with these same tools?  Absolutely possible.

Where will it start?

With the question:  Why doesn’t every Saturday morning cartoon allow kids to directly engage with the characters via a conversational interface?

We know the issues from the Television production side. Someone needs to slap the Unions upside the head so they don’t prohibit Writers and Actors from participating in these new storytelling formats. Union contracts need to be “living” documents that can be changed year-round to adapt to emerging technologies (rather than showing up five years late to the party). But that’s another blog post…for another day…

But the “forest” is much wider and deeper than Saturday morning television. With the contentAI studios platform solutions alone, we see ESL schools in China using these tools to improve conversational English. We see in-store Retail “intelligence” also being delightful and intuitive…and more…because there’s always more…

So now we need to figure out how to tie the pieces together as a Developer. Heck, I can’t even tell if my Ultrabook has WiDi, or what device I need to make it so, or whether the Smart TV group has an App Store or will be leveraging AppUP.

To navigate the forest paths at INTEL, we are fortunate to have a Senior Community Relations executive who can help steer us. That kind of one-to-one relationship between INTEL and the Development Community is remarkable. We’ve been extremely impressed with their AppUP team since early 2010 and look forward to weaving our way through more branches of INTEL in order to realize the potential, from a content Developer’s point of view, of their astounding technologies.

While HTML5, Ultrabooks, WiDi and other technologies all link to one another, it’s the human component within INTEL® that serves as the Pandoran Neural Network…it’s humans that glue it all together. Fortunately, corporations have evolved in the past 15 years compared to when different motion picture divisions ignored each other (especially the digital divisions; and, um, motion picture Studios are now paying the price for such early ignorance).

Seeing INTEL’s forest as an outside Developer made the trip to Vegas worth every long line, worn-out pair of shoes, over-priced everything and endless package of mints that the trek required. For next year’s CES, seeing these devices all playing nicely together and creating all-new content experiences is what we’re looking forward to and hope to be a part of.

 

#CES2012