Mashup Standards Part 3: JSONP versus CORS

In part 1 of this post, I covered the JSON-P "standard" for mashups. It's not so much a standard per se as a sneaky way to share JSON data between servers by wrapping it in a 'callback' function... For example, if we have our raw JSON data at this URL:

http://example.com/data.js

A direct access would return the raw data dump in JSON format:

{ foo: "FOO", bar: "BAR" }

However, a JSON-P call would return a JavaScript file that calls a 'callback' function with the raw data:

callback({ foo: "FOO", bar: "BAR" });

Since this is pure JavaScript, we can use it to bypass the "Same-Origin Policy" for AJAX... A typical AJAX call uses the XMLHttpRequest object, which only allows calls back to the originating server... which, of course, means true mashups are impossible. JSON-P is one of the (many) ways around this limitation.

Since JSON-P is something of a hack, many developers started looking for a more secure standard for sharing JSON and XML resources between web sites. They came up with Cross-Origin Resource Sharing, or CORS for short. Enabling CORS is as simple as sending this HTTP header with your XML/JSON responses:

Access-Control-Allow-Origin: *

Then, any website on the planet would be able to access your XML/JSON resources using the standard XMLHttpRequest object for AJAX. Although I like where CORS is going, and I see it as the future, I just cannot recommend it at this point.

Security

Since CORS is built on top of the XMLHttpRequest object, it has much nicer error handling. If the server is down, you can recover from the error and display a message to the user immediately. If you use JSON-P, you can't access the HTTP error code... so you have to roll your own error handling. Also, since CORS is a standard, it's pretty easy to just put an HTTP header in all your responses to enable it.
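
Rolling your own JSON-P error handling usually means pairing the script injection with a timer: if the callback hasn't fired after a few seconds, assume the request failed. Here's a minimal sketch of the idea (the function names and the 5-second timeout are just placeholders):

function jsonpRequest(url, callbackName, onSuccess, onError) {
    var timedOut = false;
    var timer = setTimeout(function () {
        timedOut = true;
        onError("JSON-P request timed out");  // no HTTP status code to inspect, just a guess
    }, 5000);

    // the server 'pads' the response with a call to this global function
    window[callbackName] = function (data) {
        clearTimeout(timer);
        if (!timedOut) { onSuccess(data); }
    };

    var scriptNode = document.createElement("script");
    scriptNode.src = url + "&callback=" + callbackName;
    document.body.appendChild(scriptNode);
}

It works, but it's a poor substitute for a real HTTP status code: a timeout can't tell you whether the server was down, the user wasn't authorized, or the network was just slow.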

My big problem with CORS comes from the fact that it just doesn't seem that well supported yet... Only modern browsers understand it, and cross-domain authentication seems to be a bit broken everywhere. If you want to get secure or personalized JSON on a mashup, your back-end applications will also need to set this HTTP header:

Access-Control-Allow-Credentials: true

And, in theory, the AJAX request will pass along your credentials, and get back personalized data. jQuery 1.7 works well with JSON-P and authentication, but chokes badly on CORS. Also, keep in mind that authenticated CORS is a royal pain in Internet Explorer. Your end users will have to lower their security settings for the entire mashup application in order to make authenticated requests.
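
Credentialed CORS also requires opting in on the client side; with the raw XMLHttpRequest object, that means setting withCredentials before sending. A minimal sketch (the URL is just a placeholder):

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://app1.example.com/secure/data.js", true);
xhr.withCredentials = true;  // ask the browser to send cookies / HTTP auth with the request
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        var data = JSON.parse(xhr.responseText);
        // ...render the personalized data...
    }
};
xhr.send();

Also note that when credentials are involved, the server has to echo back an explicit origin; the wildcard Access-Control-Allow-Origin: * is not allowed for credentialed requests.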

Now, JSON-P isn't great with security, either. Whereas CORS is too restrictive, JSON-P is too permissive. If you enable JSON-P, then your auth credentials get passed to the back-end server with every request. This may not be a concern for public content, but if an evil web site can trick you into going to their mashup instead of your normal mashup, they can steal information with your credentials. This is called Cross-Site Request Forgery, and it is a general security problem with Web 2.0 applications... and JSON-P is one more way to take advantage of any security holes you may have.

Performance

In addition, the whole CORS process seems a bit 'chatty.' Whereas JSON-P requires one HTTP request to get secure data, CORS can require three. For example, assume we had two CORS-enabled applications (app1 and app2) and we'd like to blend the data together on a mashup. Here's the process for connecting to app1 via CORS and AJAX:

  1. Pre-Flight Request: round-trip from the client browser to app1 as an HTTP 'OPTIONS' request, to see if CORS is enabled between the mashup and app1
  2. Request: if CORS is enabled, the browser then sends a request to app1, which sends back an 'access denied' response.
  3. Authenticated Request: if cross-origin authentication is enabled, data is sent a third time, along with the proper auth headers, and hopefully a real response comes back!

That's three HTTP requests for CORS, compared to one for JSON-P. Also, there's a lot of magic in step 3: will it send back all the auth headers? What about cookies? There are ways to speed up the process, including a whole ton of good ideas for CORS extensions, but these appear to be unpopular at the moment.
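
For the curious, the pre-flight step in item 1 looks roughly like the exchange below (the paths and values are just illustrative). The browser only pre-flights 'non-simple' requests -- custom headers, methods beyond GET/POST -- and the server can tell the browser to cache the pre-flight result with Access-Control-Max-Age, which trims some of the chatter on repeat calls:

OPTIONS /secure/data.js HTTP/1.1
Origin: http://mashup.example.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: X-Requested-With

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://mashup.example.com
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 86400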

Conclusion: Use JSON-P With Seatbelts

If all you care about is public content, then CORS will work fine. Also, it's a 5-minute configuration setting on your web server... so it's a breeze to turn on and let your users create mashups at their leisure. If you don't create the mashups yourself, this is sufficient.

However... if you wish to do anything remotely interesting or complex, JSON-P has much more power, and fewer restrictions. But, for security reasons, on the server side I'd recommend a few safety features:

  • Validate the HTTP_REFERER: only allow JSON-P requests from trusted mashup servers, to minimize request forgery.
  • Make JSON-P requests read-only: don't allow create/modify/delete through JSON-P.

But wait, isn't it easy to spoof the HTTP referrer? Yes, an evil client can spoof the value of the referrer, but not an evil server. In order for an evil mashup to spoof the referer, it would have to trick the innocent user into downloading and running a signed applet, or something similar. This is a typical trojan horse attack, and if you fall for it, you've got bigger problems than fancy AJAX attack vectors... DNS rebinding is much more dangerous, and is possible with any AJAX application, regardless of JSON-P or CORS support.
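
To make those two safety features concrete, here's a minimal sketch of the kind of check a JSON-P endpoint could run before responding. The function name and the whitelist are hypothetical, and the real check would live in whatever server-side framework you're using (a filter in UCM, a servlet, etc.):

var TRUSTED_MASHUPS = ["http://mashup.example.com/", "http://intranet.example.com/"];

function isAllowedJsonpRequest(refererHeader, httpMethod) {
    // read-only: only GET requests may use JSON-P
    if (httpMethod !== "GET") { return false; }

    // the referer must start with one of our trusted mashup servers
    for (var i = 0; i < TRUSTED_MASHUPS.length; i++) {
        if (refererHeader && refererHeader.indexOf(TRUSTED_MASHUPS[i]) === 0) {
            return true;
        }
    }
    return false;
}

// isAllowedJsonpRequest("http://mashup.example.com/page.html", "GET")  -> true
// isAllowedJsonpRequest("http://evil.example.org/page.html", "GET")    -> false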

Links and Free Downloads

For those of you interested in Oracle WebCenter, I created a CrossDomainJson component that enables both CORS and JSON-P, and it includes some sample code and documentation for how to use it. It currently works with WebCenter Content, but I might expand it to include WebCenter Spaces, if I see any interest.

Meet Me in Toronto on Thursday!

For those of you in the Toronto area, I'll be presenting at the AIIM/Oracle Social Business Seminar this Thursday! It's at Ruth's Chris Steakhouse, 145 Richmond Street West, Toronto, ON. The agenda is as follows:

  • 10:00 a.m: How Social Business Is Driving Innovation, Presented by: John Mancini, AIIM
  • 11:00 a.m: Solving the Innovation Challenge with Oracle WebCenter, Presented by: Howard Beader, Oracle
  • 12:00 noon: Lunch and Networking, Table Discussions on Case Study Challenges
  • 1:00 p.m: Strategies for Success Case Study, Presented by Bex Huff, Bezzotech
  • 1:45 p.m: Final Remarks

Space is limited, so register now for a seat!

Mashup Standards Part 2: Cross-Origin Resource Sharing (CORS)

In my previous post, I talked about the JSON-P standard for mashups. It's very handy, but more of a "convention" than a true standard... Nevertheless, it's very popular, including support in jQuery and Twitter. In this post I'm going to discuss what some consider to be the modern alternative to JSON-P: Cross-Origin Resource Sharing, or CORS for short.

Let's say you had two applications, running at app1.example.com and app2.example.com. They both support AJAX requests, but of course, they are limited by the "Same-Origin Policy." This means app1 can make AJAX requests to app1, but not to app2. Let's further assume that you'd like to make a mashup of these two apps at mashup.example.com.

No problem! In order to enable cross-origin AJAX, you simply need to make sure app1 and app2 send back their AJAX responses with this HTTP header:

Access-Control-Allow-Origin: http://mashup.example.com

This is easily done by adding one line to the Apache httpd.conf file on app1 and app2 (assuming mod_headers is enabled):

Header set Access-Control-Allow-Origin http://mashup.example.com

DONE! Now, with standard AJAX calls you can host an HTML page on mashup.example.com and connect to app1 and app2 using nothing but JavaScript! There are about a half dozen additional cross-origin HTTP headers that you can set... including which methods are allowed (GET/POST), how long to cache the data, and how to deal with credentials in the request... naturally, not all browsers support all headers, so your mileage may vary!

Not to mention, because the XMLHttpRequest object is used, CORS has much better error handling than JSON-P. If there's an error accessing a file, you can catch that error and warn the end user. Contrast that with JSON-P, where there's no built-in way to know when you can't access a file. You can build your own error handling, but there's no standard.
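
For example, here's roughly what the mashup page's call to app1 could look like with a plain XMLHttpRequest (the URL is just a placeholder; older browsers, and IE's XDomainRequest, would need slightly different code):

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://app1.example.com/data.js", true);
xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) { return; }
    if (xhr.status === 200) {
        var data = JSON.parse(xhr.responseText);
        // ...blend app1's data into the mashup page...
    } else {
        // CORS rejection, server error, or network problem: we can tell the user right away
        alert("Could not load data from app1 (HTTP status " + xhr.status + ")");
    }
};
xhr.send();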

Nevertheless, I still prefer JSON-P for mashups. Why? Well, it boils down to two things: performance and security. I'll be covering the specifics in part 3 of this post.

Mashup Standards Part 1: JSON-P

In a recent project, I had a client who wanted to resurface Oracle UCM content on another web page. The normal process would be to use some back-end technology -- like SOAP, CIS, or RIDC -- to make the connection. But, as a lark, I thought it would be more fun to do this purely as a mashup. I would need to tweak UCM to be more "mashup-friendly" -- I'll be sharing the code (eventually) -- but first I needed to do some research on the best mashup "standard" out there.

UCM supports JSON, but that's not enough for a true mashup. The problem is that even though UCM can send back JSON encoded responses, you cannot access this data from a different web page. This is because of the "Same-Origin Policy" in AJAX. Basically, you can make an AJAX call back to the originating server, but you cannot make it to a different server. This is quite annoying, because then you can't "mash-up" UCM content onto another web page using just JavaScript. The best mashup APIs -- like Google Maps -- can't use AJAX because of this limitation.

Many developers consider this 'security' feature quite odd, because it's totally okie-kosher to include JavaScript from other people's web sites... so why not AJAX? Knowing full well that this was kind of stupid, some developers came up with a 'convention' for fixing it: "padded JSON," or JSON-P. This means 'padding' a standard JSON response with a callback, and then calling that callback function with the response. For example, if you called the PING_SERVER service with JSON enabled, with a URL like so:

http://example.com/idc/idcplg?IdcService=PING_SERVER&IsJson=1

You would get back the following JavaScript response:

{ LocalData: { StatusMessage: "You are logged in as sysadmin", StatusCode: 1} }

You would then use the standard XMLHttpRequest object, parse this JSON data, and then do something with the message. My jQuery Plugin for UCM does exactly this... but of course it has the limitation that it will only work on HTML pages served up by UCM. You can use fancy proxies to bypass this limitation, but it's a pain.

Instead, if UCM supported 'padded JSON', the process would be different. The URL would look something like this:

http://example.com/idc/idcplg?IdcService=PING_SERVER&IsJson=1&callback=processData

And the JavaScript would instead look like this:

processData({ LocalData: { StatusMessage: "You are logged in as sysadmin", StatusCode: 1} });

In this case, the callback=processData parameter triggers the server to 'wrap' the JSON response into a call to the function processData. Then, instead of using the XMLHttpRequest object, you'd use good old-fashioned remote scripting, like so:

function pingServer() {
    var url = "http://example.com/idc/idcplg?IdcService=PING_SERVER&IsJson=1&callback=processData";
    var scriptNode = document.createElement("script");
    scriptNode.src = url;
    document.body.appendChild(scriptNode);
}

function processData(ucmResponse) {
    var msg = ucmResponse.LocalData.StatusMessage;
    alert(msg);
}

Notice how we define a function on the page called 'processData.' When the UCM response returns, it will call that function with our response data. The beauty here is that you can put this JavaScript on any web page in your enterprise, and connect directly with UCM with nothing but JavaScript. Pretty nifty, eh?
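
If you'd rather not build the script tag by hand, jQuery will do the JSON-P dance for you. Something like the snippet below should work against the same hypothetical endpoint; jQuery generates the callback function, appends the callback parameter, and cleans up afterwards:

$.ajax({
    url: "http://example.com/idc/idcplg",
    data: { IdcService: "PING_SERVER", IsJson: 1 },
    dataType: "jsonp",   // use script-tag injection instead of XMLHttpRequest
    jsonp: "callback",   // name of the query parameter the server expects
    success: function (ucmResponse) {
        alert(ucmResponse.LocalData.StatusMessage);
    }
});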

Now... JSON-P is a good idea, but it's about 5 years old... A lot of newer browsers support a slightly different standard: Cross-Origin Resource Sharing. It's an actual standard, unlike JSON-P, which is more of a convention... the purpose is to safely allow some sites to violate the silly "Same-Origin Policy". I'll be covering CORS in part 2 of this post, including its security enhancements. But, in part 3 I'll explain why I still prefer JSON-P, provided you add some extra security.

2011, and the Decade to Follow

I knew that 2011 was a big year... but not until I saw the video above did I realize that so many events that will shape the decade to come all occurred in the same year:

  • Tsunamis and nuclear disasters in Japan
  • Extreme weather worldwide
  • Revolutions in Egypt, Tunisia, and Libya
  • Rumblings of revolutions in Syria, Yemen, and Iran
  • Near economic collapse of the Euro zone, including riots in Greece
  • The death of three monsters: Osama Bin Laden, Gaddafi, and Kim Jong Il
  • The 99% 'Occupy' movement throughout Western countries
  • The passing of Steve Jobs

And countless other events and ideas and innovations that spread through the world like wildfire... It's not a cliche to say that we live in remarkable times.

"Immortal God! What a world I see dawning! Why cannot I grow young again?" -- Erasmus

"O my soul, do not aspire to immortal life, but exhaust the limits of the possible" -- Pindar

Happy new year!

Merry Christmas!

Sorry I haven't been blogging as much these days... But you can see why! A lot of end-of-year projects, and our new little girl. Here she is in her first holiday dress, meeting Santa for the first time... And looking a bit confused about the whole thing!

I'll blog next week... promise!

Open World 2011: WebCenter Presentations

I gave two presentations at Oracle Open World this month... one on Integrating WebCenter Content: Five Tips to Try, and Five Traps to Avoid! I broke it down into the big sections: contribution, consumption, metadata, security, and integrations. Special thanks to IOUG for sponsoring this talk!

My second talk was a case study based on a big project that wrapped up recently, integrating WebLogic Portal, UCM, E-Business Suite, Autonomy IDOL, and a whole bunch of other stuff to make a global e-commerce web site. The client is in a highly regulated industry, and I was unable to get permission to use their name... but if you're curious about the details, ping me!


If I missed you at Open World, I hope to see you at IOUG Collaborate 2012!

Running WebCenter Portal Pre-Built VM on Mac OSX

The WebCenter Portal team has put together a VirtualBox virtual machine to showcase the WebCenter Portal product. You can download it from Oracle. It's a big one: clocking in at 30 GB, so pack a lunch before downloading it.

The install instructions are pretty good for Windows and Linux clients... but if you're on a Mac (like me), they're missing one important tip. The file REAVDD-HOL-WC.ovf contains the information needed to import the files into a VirtualBox VM... but if you're running the free version of VirtualBox, it chokes on the import every time. The culprit is this line:

<SharedFolders>
  <SharedFolder name="Host" hostPath="D:\TEMP\Host" writable="false" autoMount="true"/>
</SharedFolders>

If you're on Windows, and have a D drive, this works fine... but if you're on a Mac (and probably Linux), this will break the import. The fix? Use this XML instead:

<SharedFolders/>

Then re-do the import... you'll need to set up sharing again once it's running. But at least now it will have a valid path!
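
If you prefer the command line, something like the following should re-create the shared folder once the VM is imported and powered off; the VM name and host path are just examples, so substitute whatever VirtualBox shows for your machine (the Settings > Shared Folders screen in the GUI does the same thing):

VBoxManage sharedfolder add "REAVDD-HOL-WC" --name "Host" --hostpath "/Users/yourname/vbox-share" --automount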

NOTE: This is just meant to be a sandbox for testing integrations, and the like. It's not meant to be placed into a production environment... but, like all demo code, I'm sure I'll find it floating around in production eventually... and have to make it work.

WebCenter Mobile: PhoneGap and ADF Together at Last!

I was always a little bit skeptical about the initial mobile offerings for UCM and WebCenter. They never impressed me, because I felt strongly that these apps were fundamentally flawed in their design...

Why? Because they focused on being Mobile Applications instead of Mobile Web. The first time I held an iPhone, I noticed that it was running a browser that supported HTML5. The first Android was the same. This was at a time when HTML5 support was rare on desktop browsers, and few developers knew how to use it. Nevertheless, I predicted years ago that it would be the future... HTML5 was so powerful that Flash and native mobile apps were unnecessary for 95% of applications. Many clients asked my advice on mobile apps, and my answer was always the same: "Skip native apps, and focus on the mobile web!"

This week, Oracle announced their next generation of the ADF Mobile toolkit... and (as I predicted) they are going the same route! Native code is no longer the focus: previously, you would create an ADF component, and it would be compiled down into native iOS or Android controls. No more! The next version will compile to HTML5 and be rendered in the mobile browser!

How can this be? With a technology called PhoneGap. It allows you to create your application in nothing but HTML5, render it in a browser, and still access native functionality (camera, location, files) with JavaScript functions. It's basically a wrapper around the built-in HTML5 browser, plus a plug-in library, which together give you an extremely powerful development environment. The next generation of ADF Mobile will be an ADF wrapper around PhoneGap, plus a few extra goodies (that I'm not allowed to talk about yet!). They call these hybrid applications because they are mostly HTML5, with a tiny bit of native code mixed in.
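
As a rough illustration of what that looks like in practice, the snippet below mixes the standard HTML5 geolocation API with PhoneGap's camera API, all from plain JavaScript. Treat the option values and the element ID as placeholders -- the exact camera options depend on which PhoneGap version you're running:

// standard HTML5 geolocation: works in the mobile browser and inside PhoneGap
navigator.geolocation.getCurrentPosition(function (position) {
    console.log("You are at " + position.coords.latitude + ", " + position.coords.longitude);
});

// PhoneGap-specific: grab a photo from the device camera as a base64 string
navigator.camera.getPicture(
    function (imageData) { document.getElementById("photo").src = "data:image/jpeg;base64," + imageData; },
    function (message) { alert("Camera failed: " + message); },
    { quality: 50, destinationType: Camera.DestinationType.DATA_URL }
);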

Well, what about those candy-coated user interfaces? How do I get those? The same way as always: mobile JavaScript toolkits. There are several available that can make very attractive interfaces that render in any smartphone.

If you prefer to roll your own UI, I'd recommend Zepto as a minimalist framework instead...

What's next for the web, then? I believe that mobile application development will be the biggest driver for the adoption of HTML5 browsers. Yes, probably only 10% of mobile phones are HTML5-enabled smart phones... but people cycle through cell phones every 2 years. Compare that to the enterprise, where some companies stubbornly refuse to upgrade from IE6!

I'd bet 90% of Americans will have an HTML5 mobile phone before 90% of them are off IE6! Sad, but true... but good news for mobile developers!

UPDATE: Dang it! Just as soon as I blog about this, Adobe goes and purchases PhoneGap! What does this mean for Oracle? Tough to say... it's probably a good thing, since most of PhoneGap is open source. The only piece that's not open source is their nifty build engine. But, since Oracle already owns its own build tooling (JDeveloper and an Eclipse plugin), this is not a stumbling block.

UPDATE 2: It appears that Adobe has done "The Right Thing" and is submitting PhoneGap to the Apache group, and re-branding it as Project Callback. This will help cement it as "the standard" toolkit for mobile app development.

The Oracle Non-Database?

Well, that was unexpected... Oracle has always been the gold standard for relational databases, but they are now throwing their hat in the "BIG DATA" ring with their new appliance... this "BIG DATA" stuff is also sometimes called NoSQL.

What's NoSQL? It's software designed to manage very large data sets as key-value pairs, instead of in a relational database. Think big-giant-hashtable. The need emerged because some HUGE data sets needed management and analysis, but were so unstructured that it was impractical to put them in a traditional relational database. Really huge personalized web sites needed to create their own NoSQL solutions just to scale: Facebook made Cassandra, LinkedIn made Voldemort, Amazon made Dynamo, Google made BigTable, etc.

At Open World, Oracle announced several products in this area... One that I found interesting is an appliance based on the Hadoop database, which is used for analysis of HUGE amounts of unstructured data. I covered Hadoop's algorithms three years ago, warning that once JOINS were practical, it could threaten Oracle's database hegemony... but it looks like their Hadoop offering is a good blended mix: Hadoop for initial unstructured analysis, then migrate to Oracle for fast reporting.

I was also impressed with Oracle's new NoSQL offering, which is mainly for the storage of massive amounts of high-speed, "lightly transactional" data. Think personalization, browser history, etc. This new offering is based on Oracle's existing Berkeley DB libraries: at 20+ years old, they're probably the oldest NoSQL database still in common use. In fact, a lot of the "big data" players are just wrappers around an array of Berkeley DB nodes. What makes Oracle's offering superior is that it's just as fast as other NoSQL databases, but you can optionally add SQLite query support instead of plain get/set methods. Also, you can add support for ACID transactions. Naturally, the queries will be much slower than if the data were in an RDBMS, and transactions will slow down the system... but at least the options are available.

Not sure what the effects of this will be... especially since a great deal of these offerings are OpenSource! But, I'd wager that it won't take long for Oracle Applications to start using them in interesting ways...

The obvious ones are user tracking, event history, and business analytics. Previously, people would only track a fraction of user or system events, because it would be totally impractical to capture or analyze it. Amazon decided to do exactly that, and surprised everyone when they proved how much of a competitive advantage they were able to glean... With NoSQL, you could store way more data than before... and with Hadoop you can analyze way more...

Well folks, it looks like NoSQL has finally hit the mainstream! Can't wait for the inevitable Exa-doop server...

PowerPoint Tips from South Park

PowerPoint is a necessary evil... everybody is expected to give presentations in it, but few people are good at it. They cram too much information into one slide, and pack it full of data that might better go in a report. Presentations work best when used to persuade; PowerPoint is an awkward tool when you try to educate. There's a reason PowerPoint was banned by the Pentagon:

"PowerPoint is dangerous because it can create the illusion of understanding and the illusion of control" -- Brig. Gen. H. R. McMaster

But alas... we're still stuck with PowerPoint... so we should probably make the best of it!

One of the ways to make PowerPoint presentations more compelling is to tell a story... unfortunately, most people are pretty bad at telling stories as well. There's an entire industry created around corporate storytelling that trains people to engage their audience with a full-fledged story... but there's an even simpler approach. The creators of South Park stumbled on a formula that they still use to assemble stories: connect every story beat with "therefore" or "but" instead of "and then."

These same rules can apply to making a PowerPoint presentation flow like a story.

You initially assemble your main points -- which is usually the hard part. Then, when assembling your points to tell a story, try to transition between your points with the word "therefore," or the word "but." Like so:

  • Slide 1
  • therefore...
  • Slide 2
  • but...
  • Slide 3
  • therefore...
  • Slide 4
  • but...

Simple, no? You'll be surprised how much better your presentations will "flow" from one point to the next with this method.

Naturally, not all presentations can fit into this pattern... for example, "Top 10" presentations flow numerically from one point to another... so if people doze off, they can pick back up at the start of the next chunk. Also, there may be times when the dreaded "and then" transition is needed, such as when a point needs to be communicated over several slides.

Nevertheless, if you try hard to use better transitions, your story will be more compelling, and PowerPoint will be one notch less evil.

WebCenter/UCM Performance Tuning, Black Box Style!

I love doing performance tuning... It's typically a mundane process of tiny tweaks and digging for gold in log files, but for some reason I find it a blast. I usually do it for every client, and sometimes I have projects dedicated exclusively to tuning.

One cropped up recently, where a client was having craaaaazy slow performance with an 11g custom component that Shall Remain Nameless. It worked fine with small data sets, but on a page with 1000 items, the load time was 12 minutes. Thus begins our adventure! But first, some wise words from the grandfather of performance optimization:

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" -- Donald Knuth

Yarp... The first thing to do is establish a benchmark. Run some kind of automated test to time performance, and then start your tweaking. Also, be sure to put the results in a nice table, so you can visually see just how awful the starting point is!
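
Your benchmark doesn't have to be fancy. Even a quick script run from the browser console (while you're on the site, so the XMLHttpRequest stays same-origin) gives you a baseline table to compare against; the service URL and run count below are just placeholders:

function benchmark(url, runs) {
    var times = [];
    function once(remaining) {
        var start = new Date().getTime();
        var xhr = new XMLHttpRequest();
        xhr.open("GET", url, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState !== 4) { return; }
            times.push(new Date().getTime() - start);
            if (remaining > 1) {
                once(remaining - 1);
            } else {
                console.log("Load times in ms: " + times.join(", "));
            }
        };
        xhr.send();
    }
    once(runs);
}

benchmark("/idc/idcplg?IdcService=SOME_SLOW_SERVICE", 5);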

It's kind of a dull process-of-elimination to disable components, restart, and run another benchmark... but you have to do that in order to narrow down the offending code. I initially thought the problem was in one component... I was wrong... So, as boring as it is, it must be done to eliminate red herrings.

Once I found the offending component, I turned on some low-level tracing parameters. You can do this from the System Audit Information administration page. I usually turn on Verbose tracing, as well as these sections: requestaudit, pagecreation. When I did a small manual test, what I got back shocked me a bit:

pagecreation/6	09.13 17:55:47.774	IdcServer-31333	page generation took 10766 ms; gets 58863 funcs 24362 incs 9256 eval 160
requestaudit/6	09.13 17:55:48.012	IdcServer-31333	FOO_SERVICE 14.5684232711792(secs)

Woah... even a simple page was taking 14 seconds, 10 of which were spent rendering the IdocScript! I checked, and these pages were far too complex for words. On a page with a 1000-item list, the component that Shall Remain Nameless was spitting out 15 Megs of HTML! Danger! Danger!

The fix? I used some of my tips from my web site performance tuning presentation, of course. I made a new custom interface, and leveraged the YUI library that's built in to UCM... specifically the DataTable widget. Bye bye IdocScript, hello JavaScript! That slimmed down the page from 15,000 KB to a manageable 300 KB, and all but eliminated the page generation time.
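
If you haven't used it, the YUI 2 DataTable takes a JavaScript array and renders a sortable table entirely on the client, so the server only has to ship raw rows instead of pre-rendered HTML. A minimal sketch, assuming the YUI datasource and datatable scripts are already on the page, and using UCM-style field names as an example:

// 'rows' would normally come back from a JSON service call instead of being hard-coded
var rows = [
    { dDocName: "DOC_001", dDocTitle: "First Document", dInDate: "9/13/2011" },
    { dDocName: "DOC_002", dDocTitle: "Second Document", dInDate: "9/14/2011" }
];

var columnDefs = [
    { key: "dDocName", label: "ID", sortable: true },
    { key: "dDocTitle", label: "Title", sortable: true },
    { key: "dInDate", label: "Release Date", sortable: true }
];

var dataSource = new YAHOO.util.DataSource(rows);
dataSource.responseType = YAHOO.util.DataSource.TYPE_JSARRAY;
dataSource.responseSchema = { fields: ["dDocName", "dDocTitle", "dInDate"] };

// renders into <div id="resultsTable"></div>
var dataTable = new YAHOO.widget.DataTable("resultsTable", columnDefs, dataSource);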

Problem solved, right? Not so fast... even with that the site was still slow. Next I suspected something in the database: bad histogram, missing index, whatever. So I turned on another low-level flag: systemdatabase. Please note: enabling this will turn your log files into a frigging Russian novel, so don't leave it on for too long... I ran another test, and got something like this:

systemdatabase/6	09.14 15:13:36.548	IdcServer-18413	53 ms. SELECT FOO_QUERY [Executed. Returned row(s): true]
systemdatabase/6	09.14 15:13:58.803	IdcServer-18413	Closing active result set
systemdatabase/6	09.14 15:13:58.804	IdcServer-18413	Closing statement in closing internals

Well... I was wrong again! Can you see the issue here? The query only took 53 milliseconds to run, so the database is hunky dory! But, according to the timestamps, it takes another 22 seconds for UCM to release the connection. That means, somewhere deep inside the component that Shall Remain Nameless, some Java code is taking 22 seconds to post-process the database results. In some scenarios, it took over 40 seconds just to post-process the results.

What the heck is it doing???????

Don't know, don't care! Instead of digging through the component's code and trying to optimize that 40-second delay, I did what any sane web developer should do when faced with a poorly performing black box: make a data cache! After quite a bit of digging, I found the magic function that was so very important that it took 40 seconds to run. So instead of calling it directly, I wrapped it up in a secure data cache. That cut the 40,000 milliseconds down to 4. You sometimes see stale data on the page, but that's always the case with a web interface... so it's usually a good tradeoff.
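
The real cache lived inside a server-side component, but the general shape of the trick is the same in any language. Here's the idea sketched in JavaScript, with hypothetical names and a made-up time-to-live; in the real component, the cache key also has to include the user's security context so one user's cached data never leaks to another:

var cache = {};              // key -> { value, expires }
var TTL_MS = 5 * 60 * 1000;  // keep entries for five minutes (made-up number)

function cachedCall(key, expensiveFunction) {
    var now = new Date().getTime();
    var entry = cache[key];
    if (entry && entry.expires > now) {
        return entry.value;               // fast path: milliseconds
    }
    var value = expensiveFunction();      // slow path: the 40-second black box
    cache[key] = { value: value, expires: now + TTL_MS };
    return value;
}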

So... after improving their performance by 5 orders of magnitude, I patted myself on the back, and flew home...

The final benchmark numbers looked amazing... and I think that's why I find performance tuning is so satisfying. It's so very lovely to see the difference between the initial and final performance... and it's even better if you need to use a logarithmic scale to see them! ;-)

You Are Not So Smart!

I've never heard of a trailer for a book before... but this one's pretty good. It's for You Are Not So Smart, which should be out in a few months:

He brings up some good points... Your future self is not to be trusted! You must trick him into doing your bidding if you hope to accomplish a lot in a day. I for one have fallen way behind in my blog post frequency... I hope there are some tips for beating writer's block as well ;-)

The animation people did a great job... it reminds me of the animated short about the origins of the credit crisis... it's only 10 minutes long, but highly recommended if you're curious about what the heck happened on Wall Street back in 2008.

New Book: ECM and Electronic Health Records

Yours truly contributed to yet another book on ECM this year, which has finally been published: Enterprise Content Management and the Electronic Health Record.

The majority was written by a former colleague, Sandra Nunn. My contribution was about the technical challenges of ECM and Records Management when it came to medical records.

Thanks to cash incentives built in to the HITECH Act, hospitals can benefit significantly by moving to electronic medical records... but it must be done carefully! These systems need to be quick, pleasant, and nearly foolproof. I've seen quite a few, and I wish just one of these EMR/EHR software companies would invest heavily in some User Experience developers... pretty doesn't equal usable.

I've said it before, and I'll say it again: doctors are busy and expensive, and forcing them to use outdated systems doesn't save any money. It just shifts costs around, and makes the hospital less efficient. In fact, one Harvard Business School study showed that most hospitals saw zero cost savings from electronic medical records! How can this be?

"One reason computerization may not be improving efficiency and quality of care: many medical software programs are designed primarily to help hospitals with their billing, accounting and registration needs, not their clinical work."

This could just be a symptom of a greater problem: if you focus on accounting rather than your core competency (healing people), things inevitably go astray... or, as Larry Ellison once screamed loudly, "your accounting procedures will never be your competitive advantage!" They are important things, but not the most important things.

In any event, if you're in the medical industry, be sure to check out this book... or others in the AHIMA library. And if you want to improve the quality of health care, or lower the cost, be sure to focus on clinical results: not accounting results.

Oracle Open World!

Open World is barely a month away! I'll be heading there early for some Oracle ACE briefings and the like... I'm normally a "broadcast only" Twitter user, but when I'm at conferences I check it all the time, and tweet with location services on. If you want to meet up, just message me!

@bex

I have a couple of sessions this year... unfortunately they are all on Thursday! Dang it! I was hoping to leave the conference early -- since Michelle and I are having our first kid, and her due date is a few weeks after Open World. Alas, the scheduling gods were not with me:

  • Session: 10843
    • Creating a Global E-Commerce Site with Oracle E-Business Suite and Oracle Fusion Middleware
    • Thursday, 12:00 PM, Intercontinental - Intercontinental Ballroom B
  • Session: 9539
    • Integrating ECM into Your Enterprise: 5 Techniques to Try and 5 Traps to Avoid
    • Thursday, 03:00 PM, Intercontinental - Telegraph Hill

I know picking Open World sessions can be a bit of a baffling ordeal... so if you're pressed for time, I'll suggest a few tips. If you want to see WebCenter based content, check out the WebCenter partner sessions. Lots of good stuff there. If you're curious about non-WebCenter products but don't know where to start, I'd recommend the Oracle ACE sessions over just about everything else. ACE sessions are a good bet: speakers are usually very knowledgeable, very passionate, and very excited to share what they know. Translation: minimal marketing fluff. You don't get the title "Oracle ACE" by being a self-promoting fool!

Well... at least most of the time Oracle ACEs aren't self-promoting fools... there are exceptions.

The 2011 Bezzotech Product Line

Yes it's true... we're making real, shrink-wrapped software products now. Our products are add-ons to the WebCenter Content suite (formerly UCM, formerly Stellent) that we hope will help a broad number of existing and future customers. You can learn more on our products page or by emailing us, but a brief run-down is as follows:

  • Localization Suite: Have trouble managing sites in multiple languages? Whether it's an existing site, or you're starting from scratch, our suite helps manage your translated content (with optional Lingotek connectors for translation management)
  • Enterprise Storage: Content management gets a lot different when scaling to 100 million or more documents. You need a better file storage plan, a better recovery plan, and frequently WORM support. This module can help you out.
  • ECM Expert Support: We know the product, because we wrote the product! Get fast answers directly from the experts. We're not afraid to support VMware or custom components.
  • Content Conversion Scripts: Do you need to upgrade to Site Studio 10gr4? From 10gr3, 7.5, or even Content Publisher? We have a toolkit that helps migrate you from where you are to where you want to be!
  • FlexGrid Solution: A customizable and personalizable "faceted search" to help your users "drill down" to find their content in a way that makes sense for them.

Don't worry... we still have a good number of free WebCenter Content samples as well. Be sure to check those out: I've heard from more than one customer that our free stuff is better than what other people charge for ;-)

More On FatWire

Last week Oracle did their "official" presentation on the FatWire acquisition. I was on-site with a client and had to miss all the fun, but it's available online. Billy has some pretty good posts on the presentation and the Q&A. He even took screen captures of the text responses... Jeez, no wonder they were so careful about their wording!

Existing Site Studio customers have the flexibility to decide if they want to move to the WebCenter Sites offering, maintain their Site Studio deployments, or integrate both solutions.

Billy thinks that means everybody will (eventually) have to upgrade from Site Studio to FatWire... because every software company eventually sings the same refrain:

  1. we've got nifty features in product ABC
  2. we will not be adding these features to product XYZ
  3. if you're on product XYZ, you can live without the features, add them yourself, or migrate

I think it's a little more complicated than that... Sure, FatWire may be the preferred front end for general web sites, but it's still going to need back-end content management. So it's not going to be a rip-and-replace kind of thing. If people have already paid the upgrade price from Site Studio 10gr3 to 10gr4, it should be relatively easy to "surface" existing WebContent in FatWire. Not to mention PDFs, executables, images, videos, etc., that probably need back-end content management. So where should FatWire end, and UCM begin?

That always boils down to the same question: where is the line between structured and unstructured content?

Oracle is pretty tight-lipped about this kind of stuff, so don't expect a firm answer until the first version of WebCenter Sites hits the shelf...

Would You Like 985% ROI?

According to a new study by Forrester Research, that's what some companies are getting by implementing Oracle Real Time Decisions (hat tip Manan Goel). This was a case study commissioned after an independent group at MIT discovered that firms employing metrics-based decision making are 5-6% more productive on average. This includes metrics such as asset utilization and return on equity. This can mean big money to big companies, and in this particular case it delivered a 985% return on investment.

Dang...

For those who don't know, Real Time Decisions (RTD) is an analytics engine in Oracle's business intelligence stack. Its primary goal is to answer the question: what is the next best step? It uses just about anything as a data source, and as long as you can create a feedback loop to determine which action was "best," it will slowly tune the system to recommend whatever happens to be working best at the time.

For example, assume you have 5 banner ads on your home page about promotional products and services. Now... people find your home page from any number of references, or Google search terms. Which banner ad should you show them? Well... obviously the best one to show them is the one that they're most likely to click. Clearly if they search for "discount chairs" you might want to show an ad for discount chairs... but maybe for some odd reason "discount tables" or even "floor wax" gets more clicks. Using RTD, you can tune your system to react dynamically to what is currently popular with people who search for those terms.
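
RTD's models are far more sophisticated than this, but as a toy version of the feedback-loop idea: track shows and clicks per banner for each search term, usually show the current winner, and occasionally try the others so the stats keep learning. A sketch with made-up function names:

var stats = {};  // stats[searchTerm][bannerId] = { shows: n, clicks: n }

function bump(term, banner, field) {
    stats[term] = stats[term] || {};
    stats[term][banner] = stats[term][banner] || { shows: 0, clicks: 0 };
    stats[term][banner][field] += 1;
}

function recordShow(term, banner) { bump(term, banner, "shows"); }
function recordClick(term, banner) { bump(term, banner, "clicks"); }

function chooseBanner(term, banners) {
    // 10% of the time, explore: show a random banner so new ones get a chance
    if (Math.random() < 0.1) {
        return banners[Math.floor(Math.random() * banners.length)];
    }
    // otherwise, exploit: show the banner with the best click-through rate so far
    var best = banners[0], bestRate = -1;
    for (var i = 0; i < banners.length; i++) {
        var s = (stats[term] || {})[banners[i]] || { shows: 0, clicks: 0 };
        var rate = s.shows > 0 ? s.clicks / s.shows : 0;
        if (rate > bestRate) { best = banners[i]; bestRate = rate; }
    }
    return best;
}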

From an e-commerce perspective, this boils down to cross-selling and up-selling opportunities that don't annoy the customer. First, it lets you create a customer "profile" -- not based on what they clicked in their profile, but on where they live, the color of their hair, previous purchases, etc. Based on that, plus a feedback loop on future purchases, you can show targeted ads that are proven to deliver results.

Well, OK... so maybe one company got 985% ROI, but will you? Well, that depends a lot on the quality of the data you have in the first place. Or, as we say in the software world: garbage in, garbage out. And collecting that data can be a lot of up-front work. But if you collect the right stuff -- and are diligent about managing it -- you could see that 5% boost as well.

In any event, at least now we have some pretty solid evidence that you don't need to be eBay or Google to get value from data-mining... even mid-sized companies these days have some pretty rich data sets... if you know what to look for ;-)

Adventures in Seattle Tech Meetups: Part I

I've been in Seattle for about a year now... because of my travel schedule, I haven't had a chance to do much networking with the local tech community. Until last night, that is!

I asked some of the guys from Minne* back in Minneapolis if there was anything like their group out here in Seattle. Graeme pointed me to a few twitter feeds and stuff to get started.

I decided to follow Seattle 2.0 on Twitter, which pointed me to the Hops-and-Chops happy hour. Thursdays at 7pm at the Auto Battery, a bar that Yelp tagged as "LOUD". Hrm... I wasn't quite sure what to expect when I arrived: how will I know it's them??? Luckily there was one big table with skinny 20-somethings, and I overheard the words "Ruby" and "Rails", so I introduced myself.

One of the guys -- Leo -- told me about his experience the first time he came there. His technique was to wear a painfully geeky t-shirt and walk around until somebody invited him to have a drink. Nifty trick...

The Hops-and-Chops crew appeared to be a 70/30 blend of geeks to entrepreneurs. Two guys there said this was typical, and also pointed me to Seattle Lean Coffee... but they warned that there weren't going to be very many developers at that one: more entrepreneurs and "connectors". Not surprising... I'd wager these factors kept the techies away:

  • morning meeting
  • mandatory pants
  • no beer

The Hops-and-Chops guys are also having a BBQ on Monday, and with luck it won't be rained out! Hopefully I'll eventually stumble upon something more like Minne* around here. My first reaction to the Seattle scene is that there appear to be tons of miscellaneous meetups and not much central coordination. A few dozen folks here and there, and maybe a big event with a few hundred... Meanwhile, back in Minneapolis, the last MinneBar un-conference sold out 1200 tickets!

The Seattle Bar Camp was held a few weeks back. I was bummed to miss it... But, since I was in Budapest at the time, I had a good excuse!

If anybody has any other recommendations, please leave them in the comments... then hopefully there will be a Part II in this series!

CORRECTION: looking around I now think they were pointing me to Open Coffee and not Lean Coffee... I'll check out both of 'em just to be safe.

Has Oracle MIX "Suggest-A-Session" Jumped the Shark???

Jump The Shark: (verb) a term to describe a moment when something that was once great has reached a point where it will now decline in quality and popularity.

Oracle MIX is a social software app to connect people in the Oracle universe. It was launched back in 2007 by The Apps Lab so people could network and stay connected during (and after) Open World. It was at the time the largest JRuby on Rails site out there. It's a decent site, and you Oracle monkeys should check it out...

I believe in 2008, they decided to try something new: allow the community to "suggest a session" for Open World. They had ten slots at Open World, and everybody was encouraged to submit a session for consideration, and vote on what they liked. The ten sessions with the most votes would get to present at Open World.

This was also a great idea... It was the ideal place for sessions outside the mainstream to get a voice at Open World... technology that might be too "bleeding edge" for a general audience, but is the bread-and-butter of geeks who only hit one conference per year. Social software, mashups, open source, installing Oracle on a Roomba... you get the idea. If you want to do a mainstream talk about a mainstream product, then submit it through the normal channels to the Open World committee... If your session isn't picked, then it probably wasn't good enough.

This model worked fine in 2008, 2009, and 2010... but I think something went really REALLY haywire this year...

MIX, being an open community, allowed people to take the voting data and mash it up in interesting ways... Greg Rahn over at Structured Data did exactly this, and presented his data analysis of the votes. Just looking at the data I saw a lot of anomalies, but to me the smoking gun is this:

  • Number of users who voted for exactly one author: 828
  • Number of users who voted for ALL sessions by EXACTLY one author: 826

Well, that ain't right... once you dig further, you see what probably happened: the Oracle MIX community has been invaded by a spammer...

Specifically... somebody out there has a mailing list with a few hundred people, and contacted them all asking for votes. Probably repeatedly. I don't know about others in the MIX community, but I personally got three such emails begging for votes... One of them was so shady it probably violated Oracle's Single-Sign-On policy. The line between self-promotion and SPAM is fuzzy... but it was clearly crossed by a lot of people this year.

I know what you're thinking... must be sour grapes, eh? But no, I did not submit a MIX session. Oracle was kind enough to approve both of my Open World presentations this year, so I thought the gracious thing to do would be to leave the MIX sessions for the community... so I'm very disappointed in the behavior of these people.

The rules as-is are broken... based on Greg's data, 200 people at Microsoft could all vote for sessions like "Reason #6734 Why Microsoft Rocks and Oracle People are Big Fat Stupid Heads"... and they'd win every slot.

All communities have this problem... once they become popular, they become valuable. Once they become valuable, some people try to extract more value than their fair share. Many large sites implement some form of moderation or karma points to keep cheating to a minimum... I think it's about time MIX did the same. I have a few ideas for "guidelines":

  1. promotion via tweets and blogs is allowed and encouraged
  2. mass communication via emails or social networks will be considered "social spam," and grounds for disqualification
  3. "down-voting" like Digg should be enabled to further prevent spammers from carpet-bombing their way to the top
  4. sessions should be outside of mainstream Oracle talks: sessions similar to ones given at Open World are discouraged
  5. a maximum of two talks can be submitted on behalf of an individual, organization, or community group
  6. a maximum of one talk can be selected on behalf of an individual, organization, or community group

Of course, this isn't perfect... the top 10 slots could still go to people with 1000 employees, and therefore 1000 reliable votes! Probably the best option is to randomly select some Oracle ACEs to be the judges every year, based on community input. Not perfect, but really hard to rig...

So... how many of you feel like you were "spammed" this year?

UPDATE: Oracle is soliciting opinions for what worked and what didn't this year. If you have an opinion about what should be fixed, please leave a comment on their blog or contact Tim Bonnemann directly.
