Thursday, August 28, 2008

AMAZING TECHNOLOGY



there is much i want to write about this, but for now, suffice it to say:

1) when he was showing the book 'Bleak House', by Dickens [1:20--1:45], in the upper-left hand corner there was a red fractal [1:18]...and i'm wondering how deep that fractal goes...i'm willing to bet that he could zoom indefinitely, and that it's made up of all the Mandelbrot images found on the web...amazing, and the technology can find the relative position within the fractal framework for each image...amazing stuff...but we don't see it all in this demo...i get the feeling he's just scratched the surface...also, imagine using Amazon Web Services and Mechanical Turk together to reconstruct certain landmarks etc., because it was done computationally, but those computations take time and resources. Obviously, Microsoft is developing the core technology for licensing, but is not going to literally go out and process every single landmark throughout the globe...or would they? is there opportunity here for the entrepreneur or not?

2) how does this relate to fractal transactions?

Fractal Transactions: Launching a New Era in the Future of Money

3) The mapping looks very flat in comparison

4) The Notre Dame section [4:16--4:45] is absolutely unbelievable...especially when the algorithms catch the picture of the picture of Notre Dame [5:19--5:21]...very amazing...

Quick SMashup Update

I've been working at a feverish pace to complete some current projects. I'll have news about those projects soon, but for now I have to keep it under wraps. Nevertheless, at least we can discuss SMashups. I currently have two associates helping me to clean up the three articles i've published so far, while i'm working to try to write up the rough-draft on the final three sections...


In other news, I cannot believe that I forgot to post a link to such an important article:

Appeals Court Overturns Injunction-Denying Open Source Ruling


This spells good news for developers. Consider this with respect to Mashup and SMashup developers--Microsoft estimates that there are over 100 million people on the web that do some level of programming (html, javascript, maybe vba, complicated excel macros), but that there are only about 19 million 'professional' programmers. This means that there is an untapped potential of about 80 million 'quasi-developers' that can power Web 2.0--Mechanical Turk on Steroids.

So this means that we can expect to find ways to cash in on developing SMashups. For example, one simple revenue model is to build a gadget that becomes popular, and then tie that gadget into online advertising (search, banner, or AdSense, for example). Then, if millions of people install your gadget, you get money when they open your gadget and view the ads...

new businesses will be built around that model, and other similar models. The key to success, however, is diversity of application. Therefore, we will see more and more companies developing tools so that any sufficient 'power user' (someone that would maybe program an excel module, or write some simple javascript) can create an application and share it with the world easily.

One example is Microsoft's Popfly. Popfly allows developers to develop 'blocks' of code. developers can then connect these blocks to create new applications. for example, you could:

1) Drag out an 'image scraper' block
2) attach the input to some url (presumably a url whose page contains many img tags)
3) drag out a 'fancy slideshow' block
4) connect the output from the 'image scraper' to the input of the 'fancy slideshow'

voila, you have a new 'Mashup' in Popfly...here's a simple example using a Flickr block and a map block.



Popfly even works with Facebook. Of course, Microsoft does own a 1.6% interest in Facebook, but we'll save that discussion for another post. i also want to remember to discuss my thoughts concerning Facebook valuation, and the valuation of social networks in general.

Monday, August 18, 2008

Sunday, August 17, 2008

SMashups - PART III

SMashups: Scalable, Server-less, Social Mashups for Web 2.0
Part III: Restoring Context:
Building a Simple OpenSocial XMLHTTPRequest Object
---------------------------------------------------------------------

let's skip all the pontification and get right to the code. what we are going to attempt to do is build our own simple version of xmlhttprequest--we'll call it nolyXMLHTTPGet--and we'll keep it simple to highlight the main issue of this article, specifically with regard to the opensocial api. with that in mind, consider the following object:



function nolyXMLHTTPGet(url, onSuccess, onError){
    //initialize parameters
    this.url = url;
    this.onSuccess = onSuccess || function(response){return true;};
    this.onError = onError || function(response){return false;};
    this.response = null;
    this.incall = true;

    var _params = {};
    _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.GET;
    _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;

    //this is the internal handler
    this.__internalHandler = function(response){
        //flag that we are no longer in call
        this.incall = false;
        this.response = response;
        //process response
        if(response.errors.length > 0){
            this.onError(this);
        }
        else{
            this.onSuccess(this);
        }
    };

    //call makeRequest, passing in __internalHandler as the callback
    //(the handler must be defined before we pass it along)
    gadgets.io.makeRequest(url, this.__internalHandler, _params);
}




so now we have a very simple xmlhttprequest object that tries to wrap makeRequest and make it a bit more friendly. in this example, the constructor accepts a url plus onSuccess and onError handlers, and builds the request parameters internally. besides the event handlers, the object has three properties: url, response, and incall (to signal that the object has a call open and is awaiting a response from opensocial). the object also has one '__internalHandler' function, which is used to detect an error or success and make the appropriate callback. also notice that the callback handlers both pass references to 'this', the instance of the object.



the logic is simple, but for the newbies who may be lurking, i'll give at least a summary explanation. the object first initializes some internal properties based upon the input parameters, flags that it is in a call, and then invokes gadgets.io.makeRequest. note that the callback is to the internal function, which allows us to turn off the incall flag, load the response into an internal property, and then inspect the response to raise the appropriate callback handler (error or success). also note that we pass the instance object along when we invoke the outer event handler.



elegant, wouldn't you say?



...only problem is, it doesn't work. the problem is that when makeRequest returns, the system cannot even find this.__internalHandler, because 'this' is all out of context on the return call. in point of fact, 'this' actually points to the iframe that contains most of the OpenSocial container's code. so context is definitely all out of whack. but this is where we take a slightly more complex, but assuredly successful, approach: we're going to preserve context ourselves :) i wish i could remember where i learned this trick because i'd like to cite them, but i didn't take notes on everything in the earliest days of my development, i was just trying to get this stuff to work.
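to see the 'this' problem in isolation, here's a minimal sketch (all names hypothetical, no gadgets.io involved) of exactly this failure: when a method reference is handed off as a bare callback, the call site decides what 'this' is, and it's never our instance.

```javascript
// hypothetical sketch: a constructor whose handler relies on 'this'
function Fetcher(url) {
  this.url = url;
  this.handler = function (response) {
    // 'this' is whatever the invoker bound it to -- NOT necessarily our Fetcher
    return (this && this.url) ? this.url : undefined;
  };
}

// simulate the container invoking the callback with no receiver object
function invokeLikeContainer(callback) {
  return callback("fake response"); // plain call: 'this' is not the Fetcher
}

var f = new Fetcher("http://example.com/api");
var lost = invokeLikeContainer(f.handler); // context lost: this.url is gone
var kept = f.handler.call(f);              // only works if we force 'this' back
```

the container can't be made to call `.call(f)` for us, which is why we need the closure trick below instead.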



this is the voodoo that we didn't have to do with ajax...so pay close attention newbies :)



function nolyXMLHTTPGet(url, onSuccess, onError){
    //set parameters for this object
    this.url = url;
    this.onSuccess = onSuccess || function(context){return true;};
    this.onError = onError || function(context){return false;};
    this.response = null;
    this.incall = true;

    var _params = {};
    _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.GET;
    _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;

    //encapsulate 'this' object for context restore
    var _context = this;

    //now we have an internal handler that accepts response and _context
    this.__internalHandler = function(response, _context){
        //flag that we are not in a call
        _context.incall = false;
        _context.response = response;
        //process response
        if(response.errors.length > 0){
            //call onError event handler
            _context.onError(_context);
        }
        else{
            //call onSuccess event handler
            _context.onSuccess(_context);
        }
    };

    //create a virtual handler function variable
    var _virtualhandler = function(response){
        //inside virtual handler, calls instance handler
        _context.__internalHandler(response, _context);
    }; //note this semi-colon, which is required (easy to forget)

    //invoke makeRequest, passing in virtual handler for callback
    gadgets.io.makeRequest(url, _virtualhandler, _params);
}



the comments pretty much explain what's going on. essentially, because we have absolutely no real context when we return, we simply pass the context along by wrapping a more powerful callback inside our new virtual function. therefore, when the makeRequest returns and calls the virtual handler, we can restore context internally--instead of using the 'this' keyword, we are using '_context', which holds a pointer to the original instance object where the call was initiated. even if we initialize and run 10 of these one at a time, and they all come back at different times, the context helps us to make sure it all gets sorted out and values assigned to the proper instance objects.
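to show the closure mechanics standing on their own, here's a stripped-down sketch with gadgets.io replaced by a hypothetical stub (fakeMakeRequest); ContextKeeper and the stub are illustration names, not part of any real api.

```javascript
// hypothetical stub: invokes the callback with no receiver, like a real container
function fakeMakeRequest(url, callback) {
  callback({ errors: [], data: "payload for " + url });
}

function ContextKeeper(url, onSuccess) {
  this.url = url;
  this.response = null;

  var _context = this; // capture the instance in a closure

  var _virtualhandler = function (response) {
    // context restored via the closed-over _context, never via 'this'
    _context.response = response;
    onSuccess(_context);
  };

  fakeMakeRequest(url, _virtualhandler);
}

var seen = [];
new ContextKeeper("http://a.example/api", function (ctx) { seen.push(ctx.response.data); });
new ContextKeeper("http://b.example/api", function (ctx) { seen.push(ctx.response.data); });
```

each instance closes over its own _context, so even with several requests in flight, every response lands on the object that started it.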



so, how do we use this new object? consider the following code:



function main(){
    var url = "http://www.abc.com/api?p1=1&p2=2";
    var myHTTP = new nolyXMLHTTPGet(url, doOnSuccess, doOnError);
}
function doOnSuccess(_context){
    alert(_context.response.data.xml);
}
function doOnError(_context){
    alert(_context.response.errors[0]);
}



But what have we gained? still, our code does not look much different from the first lines of code we wrote using makeRequest directly. granted, we do have onSuccess and onError now, which is really helpful, but to make this a really powerful, easy-to-use object, we'll need to come up with a much more comprehensive object model...for instance, consider the following class as an example (please note that methods with two underscores __ are internal only and use context):



function nolyXMLHTTP(url, opt_onSuccess, opt_onError, opt_onTimeout, opt_timeoutms){
    this.url = url;
    this.onSuccess = opt_onSuccess || function(_context){return true;};
    this.onError = opt_onError || function(_context){return false;};
    this.onTimeout = opt_onTimeout || function(_context){return false;};
    this.timeoutms = opt_timeoutms || 10000; // 10 seconds
    this.params = {};
    this.response = null;
    this.inCall = false;
    this.refreshInterval = 1; //refresh every second (value is in seconds)
    this.GetXML = function(opt_onSuccess, opt_onError, opt_onTimeout, opt_timeoutms){
        this.onSuccess = opt_onSuccess || this.onSuccess || function(_context){return true;};
        this.onError = opt_onError || this.onError || function(_context){return false;};
        this.onTimeout = opt_onTimeout || this.onTimeout || function(_context){return false;};
        this.timeoutms = opt_timeoutms || this.timeoutms || 10000; // 10 seconds
        var _params = {};
        _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.GET;
        _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;
        this.params = _params;
        this.__makeCachedRequest();
    }
    this.PostXML = function(postdata, opt_onSuccess, opt_onError, opt_onTimeout, opt_timeoutms){
        this.onSuccess = opt_onSuccess || this.onSuccess || function(_context){return true;};
        this.onError = opt_onError || this.onError || function(_context){return false;};
        this.onTimeout = opt_onTimeout || this.onTimeout || function(_context){return false;};
        this.timeoutms = opt_timeoutms || this.timeoutms || 10000; // 10 seconds
        var _params = {};
        _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.POST;
        _params[gadgets.io.RequestParameters.POST_DATA] = postdata;
        _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;
        this.params = _params;
        this.__makeCachedRequest();
    }
    this.__makeCachedRequest = function() {
        var ts = new Date().getTime();
        var sep = "?";
        if (this.refreshInterval && this.refreshInterval > 0) {
            ts = Math.floor(ts / (this.refreshInterval * 1000));
        }
        if (this.url.indexOf("?") > -1) {
            sep = "&";
        }
        this._new_cached_url = [ this.url, sep, "nocache=", ts ].join("");

        //encapsulate 'this' for context restore on the callbacks
        var _context = this;
        var callback = function(response){_context.__internalHandler(response, _context);};
        var _ontimeout = function(){_context.__timeoutMonitor(_context);};

        this.inCall = true;
        setTimeout(_ontimeout, _context.timeoutms);
        gadgets.io.makeRequest(_context._new_cached_url, callback, _context.params);
    }
    this.__internalHandler = function(response, _context){
        if(!_context.inCall){
            //the timeout already fired; just record the late response
            _context.response = response;
            return;
        }
        _context.inCall = false;
        _context.response = response;
        if(response.errors.length > 0){
            _context.onError(_context);
            return false;
        }
        else{
            _context.onSuccess(_context);
            return true;
        }
    }
    this.__timeoutMonitor = function(_context){
        if(_context.inCall){
            //still waiting: flag the call dead and raise the timeout handler
            _context.inCall = false;
            _context.onTimeout(_context);
        }
    }
}



Having this object allows us to simplify our code for handling requests, making the situation similar to using the standard XMLHTTPRequest object. For instance, we could do the following:



function main(){
    var _url = "http://www.rhapsody.com/api/somecall=1";
    var _timeoutms = 5000;
    var _onSuccess = function(_context){
        alert(_context.response.data.xml);
        return true;
    };
    var _onFailure = function(_context){
        alert(_context.response.errors[0]);
        return false;
    };
    var _onTimeout = function(_context){
        alert("Timeout for " + _context.url);
        return false;
    };
    var aCall = new nolyXMLHTTP(_url);
    aCall.GetXML(_onSuccess, _onFailure, _onTimeout, _timeoutms);
}



Of course, we can still expand upon the object and make it a bit more flexible. For instance, we could create a method that accepts an XSL document and an output DIV object and does a 'FetchAndTransform' that would make this very easy to use. I'm going to create a new open-source project on google code so that we can start building out this object. but for now, i'll end part iii here, and get started with part iv of the series.
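just to make the 'FetchAndTransform' idea concrete, here's a hedged sketch of the plumbing; fetchAndTransform, fakeFetch, and fakeDiv are all hypothetical stand-ins (in a real gadget the fetch would be gadgets.io.makeRequest and the transform an XSL pass over the DOM response).

```javascript
// hypothetical: fetch a url, run a transform over the response, write it out
function fetchAndTransform(fetchFn, url, transformFn, target) {
  fetchFn(url, function (response) {
    target.innerHTML = transformFn(response.data);
  });
}

// stand-ins for the container and the page
var fakeFetch = function (url, cb) { cb({ data: "<item>42</item>" }); };
var fakeDiv = { innerHTML: "" };

fetchAndTransform(fakeFetch, "http://example.com/api", function (xml) {
  // a toy 'transform': strip tags, wrap in bold
  return "<b>" + xml.replace(/<[^>]+>/g, "") + "</b>";
}, fakeDiv);
```

the real method would live on the nolyXMLHTTP object and reuse its onSuccess plumbing, but the shape (fetch, transform, target) stays the same.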



END OF PART III

Adding a Gadget to your blog

Blogger Buzz: Spice Up Your Blog with Google Gadgets!

adding a gadget to your blog is very simple. very soon, we will see hundreds of new gadgets designed specifically to run in blogs, combined with 'extensions' that also run in other contexts--social containers such as MySpace, orkut, and Ning, and also regular gadgets at iGoogle and many other OpenID sites that integrate OAuth...

...get started today and add a gadget from the directory. then, get ambitious and create your own test gadget. i'd be happy to collaborate. just post a link or comment.

Saturday, August 16, 2008

Javascript Blogger API

Blogger Developers Network: Blogger JavaScript Client Library Released

I was looking around for some sample code and found this library, which was published in October 2007. This is an example of a VERY comprehensive offering, providing Javascript client access to the Blogger Data API, the Calendar API, and full authentication and everything. I'm not sure yet if it runs within gadgets, but this looks to be a very good fit. I'll share my thoughts and analysis on this package going forward.

Monday, August 11, 2008

Great Day at Blogger!

you cannot imagine how excited i was yesterday to find out that blogger added support for gadgets. you see, originally i was just working on writing OpenSocial gadgets, and testing stuff out in orkut (and myspace now as well). but most of the opensocial platform is built upon gadgets (e.g. gadgets.io.makeRequest), which is what these articles are about. anyway, all i had to do was comment out the code that refers to the OpenSocial namespace and the gadget worked perfectly!

of course the size is all wrong. i was working on the gadget for orkut, which has a different layout. i guess my next priority, after i finish this series of articles, is to create another article about making the gadget 'container-size' aware. this should be fun.

in any event, thanks a lot blogger for getting this up. now, get crackin' and bring us opensocial via friendconnect? ;) also, i just want to comment that i'm very excited about the Google Social Graph API and also XFN and FOAF.

i can't wait to see how all this plays out over the next year. go opensocial. go google! and thank god for OAuth and OpenID :)


"There is a tide in the affairs of men.
Which, taken at the flood, leads on to fortune;
Omitted, all the voyage of their life
Is bound in shallows and in miseries."

Julius Caesar [iv.iii.218–221]

that has pretty much been my favorite quote now for as long as i can remember having a favorite quote. but i've been waiting 10 years to be able to find the appropriate time to quote the rest...as brutus continues...

"On such a full sea are we now afloat,
And we must take the current when it serves,
Or lose our ventures."

Julius Caesar [iv.iii.222-224]

thanks
nolybab praetorius

SMashups - PART II

SMashups: Scalable, Server-less, Social Mashups for Web 2.0
SMashups - PART II
Getting Out of the Sandbox
--------------------------
first, we need to understand why getting out of the 'sandbox' is even important. most of the mashups out there in the wild today are nothing more than web pages, driven by sophisticated back-end web servers that cull information from many different web services into a single page. Some more sophisticated mashups are now starting to evolve that have AJAX interfaces, dynamically driven by connecting to the mashup server. today we see impressive automatic, real-time updating tickers on stocks, for example; or we can collaborate live on documents using only a web browser, and even combine web services from many different providers to create a single 'mashup' view of the web.

but therein lies the weakness to scalability. with a mashup, when the client needs information from several web services for a single request, the client must first go to a 'single' server that is programmed to converge the various data streams into a single response to the client. this wreaks havoc on server systems, and detracts from scalability. i have read many articles about very popular mashups crashing DNS systems, or consuming massive bandwidth and processing power. in technical terms, a bottleneck.

what happens when there are thousands, millions, or even tens of millions of clients? i have read articles about mashup servers grinding to a halt under the load, or bringing down some unsuspecting ISP DNS system. Sure, the server works fine responding to clients, and actually scales quite well. again, the problem is that for each client request, the server has to query many external servers on the back end, multiplied by the number of users accessing the system at any one time; quickly tapping all system resources.

in an ideal world, mashups would not need to go through the bottleneck server. rather, the mashup application would reside on the client, and the client would gather all of the web-service resources from the various servers out in the web, and then integrate the results right there on the client--but such a thing was previously not possible. why? well, i'm sure you probably already know the answer to that question, but for those newbies out there, i'll take some time to explain, briefly.

while AJAX is all the rage nowadays, the technology itself has been around for well over 10 years, just silently taking root throughout the web. in fact, ajax as a term wasn't even used before 2005 (see jesse james garrett's original article here: http://adaptivepath.com/ideas/essays/archives/000385.php). however, i was using these same technologies in 1999. in fact, there were others who introduced me to the technology who had been using it for some time before then. my point is that ajax isn't really all that new. what was needed for the concept to really go mainstream was the name. also, just so everyone knows, AJAX is NOT an acronym. it's a big debate, and i just wanted to weigh in with my opinion on the issue. while i do not want to rehash all the information in his article, i do want to allude to something that jesse james wrote at the end of it: "The biggest challenges in creating Ajax applications are not technical. The core Ajax technologies are mature, stable, and well understood. Instead, the challenges are for the designers of these applications: to forget what we think we know about the limitations of the Web, and begin to imagine a wider, richer range of possibilities."

back to the point of this article. although ajax is not really a single technology itself, but rather a blend of technologies, the most important aspect of ajax is the xmlhttprequest, which allows programs to post data back through the website without having to refresh the entire page to make a server trip. this allows developers to send smaller snippets of data, and update portions of the screen using dynamic data, drastically improving the user experience. but xmlhttprequest came with a string attached: namely, one could only post an xmlhttprequest back to the same domain where the original page was loaded from. effectively, this 'sandbox' protects the user. in other words, the xmlhttprequest object FORCES the client to use just a single server, creating the mashup bottleneck dilemma. furthermore, because truly effective mashups use web services from several servers, the more sophisticated mashups actually exponentially increase the demands on the server. so, wouldn't it be great if there were a way to let the client collect the data from the various sources directly, without having to go through a bottleneck mashup server first? "Ah! But Wait!", i hear you cry, "the whole point of the 'bottleneck' server is to get around the browser 'sandbox', because clients cannot make XMLHttpRequest calls to foreign domains." but it's just not so, anymore.

i'll explain why the sandbox doesn't matter, shortly. But first, consider the implications. Many mashup developers would immediately be able to get rid of their servers and run the entire mashup within a client...of course, enterprising organizations may store information on the servers for logging etc., so a server would still be necessary for those purposes. but, almost everything happening on the client would be able to take place without impacting the server (i.e. bandwidth, processing, DNS, even temporary storage and retrieval space). a majority of mashups that I have researched could easily accomplish the same application completely without a server!

to make this possible, we somehow have to get around that blasted sandbox so that our client can call to any domain on the internet, directly, without having to go through a bottleneck server. we can accomplish this thanks to what i call the social container--the OpenSocial container, as one example. of course, we can use almost any social networking platform, which leaves facebook, windows live (think popfly), and many other social containers that provide context. however, for the sake of this article, i am going to focus on the OpenSocial container as a means of creating SMashups, though i will also provide urls to resources for other social containers. but back to the problem of the sandbox. fortunately for us, the opensocial container provides just the answer, in the form of:


gadgets.io.makeRequest(url, callback, params);

granted, this is actually a call to gadgets.io, not opensocial, so why all the hubbub about social context and containers? we'll get into that later, but keep in mind the three S's of SMashups (scalable, serverless, and social). also keep in mind that opensocial directly supports google gadgets, but that other social containers also support gadgets and/or widgets. of course, as i keep repeating, we'll be focusing on gadgets and opensocial in this article.

This changes everything, as you'll see. First, rather than programming the client to make a call to a mashup server, forcing the mashup server to hit the other web services and return an integrated result, we can now program the client to gather (and even cache) resources on the client side. from the mashup developer perspective, we completely eliminate the need for the mashup server. even for enterprises, this represents a huge opportunity, allowing companies to focus their servers on providing core-business web services, and offloading mashup development to the client--freeing resources. now, granted, one could argue that effectively we haven't technically changed a thing, and that we are just using the social container to act as a proxy, effectively offloading the 'mashupping' onto their servers...

...and that's just the point. never before in history was it possible to have such a large audience able to access and run ajax applications that could call out to many, many servers without having to worry about programming a 'dedicated' server. this technology now allows mashup developers to focus on developing mashup applications, skipping all the costs and hassle associated with even the most trivial IT operations...

so, we see that we have a powerful new way to communicate from the client. but gadgets.io.makeRequest() is not without its flaws. in this case, the weakness is its very general nature. makeRequest() is great for fetching data, but it doesn't give us, as developers, anything to hold onto, and no context when it returns. contrast that with XMLHttp, which provided a nice little object that we could use: it gave us onError, onSuccess, and onTimeout callback handlers, and when we received a callback, the object was already loaded with response data. we could hold the object in memory and manipulate its values...

...that's not quite the case with makeRequest(). with makeRequest, we tell it which url we want to fetch from (url), what function to call on return (callback), and which params to use. if we are issuing a post, for example, then we set the appropriate params to indicate a post, and also another parameter to carry the data (params is an array of parameters). here's how to use makeRequest():

function example(){
    var url = "http://www.Rhapsody.com/API/someservice?someparameter=1";
    var _params = {};
    _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;
    gadgets.io.makeRequest(url, handleresults, _params);
}
function handleresults(response){
    if(response.errors.length > 0){
        alert(response.errors[0]);
    } else {
        alert(response.data.xml);
    }
}


one more thing worth pointing out is that during development, it's often useful to be able to grab fresh information, even if you just accessed the information recently. this can be a problem in opensocial because the container usually caches requests for some period of time. therefore, even if you update something that you are trying to load, when you call for it you may only get what is in the cache, not the data you updated. to overcome the cache, we use a very old technique: append essentially a timestamp onto the end of the url request, effectively confusing the cache into thinking this is a new request to a new url :) here's the code (which is available on the opensocial api tutorial page).


function makeCachedRequest(url, callback, params, refreshInterval) {
    var ts = new Date().getTime();
    var sep = "?";
    if (refreshInterval && refreshInterval > 0) {
        ts = Math.floor(ts / (refreshInterval * 1000));
    }
    if (url.indexOf("?") > -1) {
        sep = "&";
    }
    url = [ url, sep, "nocache=", ts ].join("");
    gadgets.io.makeRequest(url, callback, params);
}

using this makeRequest wrapper, we can easily overcome the cache. note that refreshInterval is in seconds (it gets multiplied by 1000 inside): we can set it to 1000 if we'd like, or drop it all the way to 1. of course, we would only want to override the cache for development purposes, because most of the information in a mashup query would not change from minute to minute. for example, if we search against the amazon api for book lists and the user searches the same terms over and over, the cache automatically responds, saving the round-trip :)
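the url arithmetic is easy to check on its own. here's the cache-busting logic factored into a pure function (nocacheUrl is a hypothetical name; the real wrapper calls gadgets.io.makeRequest with the result), with the current time passed in so the bucketing is visible:

```javascript
// the cache-busting arithmetic from makeCachedRequest as a pure function;
// refreshInterval is in seconds, nowMs is a millisecond timestamp
function nocacheUrl(url, refreshInterval, nowMs) {
  var ts = nowMs;
  var sep = (url.indexOf("?") > -1) ? "&" : "?";
  if (refreshInterval && refreshInterval > 0) {
    // bucket the timestamp: the url only changes once per interval
    ts = Math.floor(ts / (refreshInterval * 1000));
  }
  return [url, sep, "nocache=", ts].join("");
}
```

within one interval every call produces the same url (so the container cache still helps), and as soon as the interval rolls over, the url changes and the cache misses.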

Also, we'll create two more functions, just to help us get started...

function xmlhttpGet(url, responsehandler, refreshinterval){
    var _params = {};
    _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.GET;
    _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;
    makeCachedRequest(url, responsehandler, _params, refreshinterval);
}
function xmlhttpPost(url, postdata, responsehandler, refreshinterval){
    var _params = {};
    _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.POST;
    _params[gadgets.io.RequestParameters.POST_DATA] = postdata;
    _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;
    makeCachedRequest(url, responsehandler, _params, refreshinterval);
}

most likely, we'll only be using GET. later in the series, i'll show you how to use the social container to store persistent data, freeing up the server from having to store most trivial profile / settings / favorites data :)

however, there's one other design consideration to take into account when designing your code--namespaces...keep in mind that your code could be running in any number of contexts, much of which you will not have any control over. in this type of platform, the chances of naming conflicts increase dramatically. for this reason, try not to use global variables, or even global functions. instead, we'll use encapsulation to create all of our functions within a namespace. from this point forward, i'll be using the nolyXMLHTTP namespace (you can use whatever you'd like, of course)...therefore, the code in this section would be included as follows:

//these are all 'static' functions, meaning they do not access any instance fields/data
var nolyXMLHTTP = {};
nolyXMLHTTP.makeCachedRequest = function(url, callback, params, refreshInterval) {
    var ts = new Date().getTime();
    var sep = "?";
    if (refreshInterval && refreshInterval > 0) {
        ts = Math.floor(ts / (refreshInterval * 1000));
    }
    if (url.indexOf("?") > -1) {
        sep = "&";
    }
    url = [ url, sep, "nocache=", ts ].join("");
    gadgets.io.makeRequest(url, callback, params);
};
nolyXMLHTTP.getRequest = function(url, responsehandler, refreshinterval){
    var _params = {};
    _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.GET;
    _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;
    nolyXMLHTTP.makeCachedRequest(url, responsehandler, _params, refreshinterval);
};
nolyXMLHTTP.postRequest = function(url, postdata, responsehandler, refreshinterval){
    var _params = {};
    _params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.POST;
    _params[gadgets.io.RequestParameters.POST_DATA] = postdata;
    _params[gadgets.io.RequestParameters.CONTENT_TYPE] = gadgets.io.ContentType.DOM;
    nolyXMLHTTP.makeCachedRequest(url, responsehandler, _params, refreshinterval);
};

doing this allows us to write code such as the following:

function main(){
    nolyXMLHTTP.getRequest(someURL, callbackhandler, 1);
    //or
    nolyXMLHTTP.postRequest(someURL, someData, callbackhandler, 1);
}
function callbackhandler(response){
    if(response.errors.length > 0){
        alert(response.errors[0]);
    }
    else{
        alert(response.data.xml);
    }
}
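as a quick aside on why the namespace matters: here's a tiny sketch showing two gadgets on the same page each shipping their own 'getRequest' without clobbering one another (both namespace names here are hypothetical, purely for illustration).

```javascript
// two gadgets, each hanging its functions off its own namespace object
var nolyNS = {
  getRequest: function (url) { return "noly fetched " + url; }
};
var otherGadgetNS = {
  getRequest: function (url) { return "other fetched " + url; }
};

// both coexist; a global 'function getRequest(...)' from either gadget
// would have silently overwritten the other's
var a = nolyNS.getRequest("http://a.example");
var b = otherGadgetNS.getRequest("http://b.example");
```

that's the whole trick: one global name per gadget instead of one per function.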

it may not seem like we have made much progress. after all, the only difference between the first code that i wrote (using makeRequest() directly) and this last code (using our namespace) is that this code calls through our shallow wrapper, providing us with separate get and post methods. however, there is a method to this madness, which we'll go over in part iii. you'll find part iii really rewarding, as we'll start building our own XMLHTTPRequest object that gives us most of the core functionality we expect from the standard XMLHTTPRequest object we all know and love. but it takes a bit of voodoo to make that happen, which i hope everyone will appreciate :)

until part iii...

...happy coding,
nolybab praetorius
END OF PART II

SMashups - PART I

SMashups: Scalable, Server-less, Social Mashups for Web 2.0
Introduction
------------
The age of Web 2.0 is maturing.
With it come mashups 2.0.

why the term SMashups? in this case, the S actually has three meanings: Social, Scalable, and Serverless. that's right, it's about developing a new web service that resides directly on the client, through a social network, without needing a mashup server. almost any application / mashup would benefit from considering this approach. and i anticipate that in the very near future, there will be a FLOOD of new development along these lines, as mashup developers begin to realize how few resources it takes to build these SMashups.

i know i'm not the only one working on these systems, but i'm hoping to at least join the growing chorus of advocates with this article, and to contribute to the growing body of information supporting ongoing development efforts in this new digital frontier (it seems every decade we have a new digital frontier). so to many of you, the methods i am suggesting are probably already quite familiar. you probably even have better ways of doing things programmatically, or tools that you already use. for many others, however, this is something entirely new. regardless, i'm confident that we'll be seeing a mass exodus from Mashups to SMashups, so the last half of this year will be truly interesting to watch unfold as the industry giants align for the full impact of Web 2.0.

Because there is so much information to present, I decided to release this article in six parts:

part i) introduction
part ii) getting out of the sandbox
part iii) restoring context : building a simple opensocial xmlhttprequest object
part iv) the proper use of servers (cloud computing and the mashup developer)
part v) more client-side goodies (container cache, application cache, and durable remote storage)
part vi) what does it all mean? putting it all together

this is part i, the introduction, where i just try to lay out what i will communicate. i will publish each part within 1--2 weeks of the previous one, and quicker if possible. in fact, i'm really hoping that others will contribute great ideas and help me get the dialogue going, so i can adjust the writing of future sections based upon feedback from the community regarding previous ones. by the last section, i hope to put all the pieces together and demonstrate not only what cool technologies we have available, right under our noses, but also how to profit from this knowledge and build a better internet for the future while we profit.

in part ii, i'll discuss how we get out of the sandbox, and how gadget technologies and social networks have provided a much-needed resource that allows mashup developers to escape the sandbox, liberating us to create applications that run entirely on the client and eliminating the need for a mashup server (i.e. bottleneck).
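for a taste of what part ii covers, here is a minimal sketch of a gadget spec (the feed URL and title are placeholders of my own); the container fetches the URL through its own proxy, so the browser's same-origin sandbox never gets in the way:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="SMashup Demo">
    <Require feature="opensocial-0.8"/>
  </ModulePrefs>
  <Content type="html">
    <![CDATA[
      <div id="out"></div>
      <script type="text/javascript">
        // the container proxies this request server-side, so the usual
        // same-origin restriction on the client does not apply
        gadgets.io.makeRequest("http://example.com/feed.xml",
            function (response) {
              document.getElementById("out").innerHTML = response.text;
            });
      </script>
    ]]>
  </Content>
</Module>
```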

in part iii, we'll investigate an issue specific to makeRequest: context. it's not a major developmental breakthrough, technically speaking, but there is one special trick we HAVE to be aware of when using makeRequest, something we never had to worry about with XMLHttpRequest--context. we'll build our own very simple xmlhttprequest object, based upon makeRequest, to explain how we can handle it, and how to use this raw power to our advantage.
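to preview the trick, here is a sketch of closure-based context binding. the Widget and fakeTransport names are my own illustration; the point is that a transport like gadgets.io.makeRequest invokes its callback as a bare function, with no useful `this`, so we restore the context ourselves:

```javascript
// a hypothetical consumer object whose callback needs its own state
function Widget(name) {
    this.name = name;
}
Widget.prototype.handleResponse = function (data) {
    return this.name + ": " + data;
};

// bindContext wraps a method so it always runs against its owner object,
// no matter how the transport layer invokes it
function bindContext(obj, method) {
    return function () {
        return method.apply(obj, arguments);
    };
}

// simulate a transport (like gadgets.io.makeRequest) that calls the
// callback as a bare function, losing any `this` context
function fakeTransport(callback) {
    callback("payload");
}

var w = new Widget("weather");
var result;
fakeTransport(bindContext(w, function (data) {
    result = this.handleResponse(data); // `this` is now our Widget
}));
// result is now "weather: payload"
```

without bindContext, `this.handleResponse` inside the callback would fail, because `this` would not be our Widget.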

in part iv, i'll try to lay out what i believe to be the best way to use servers. granted, we don't necessarily 'need' servers to support SMashups, but we do find through analysis that having a server can provide simple benefits that allow us to build new models for fun and profit. we'll talk about how to get services started for free, issues related to scalability, maintainability, and all the other 'abilities' i have time to address.

in part v, we'll look at technologies such as persistent storage in opensocial, google gears, and developing our own caching systems to make applications that are much more responsive, and even distributing info storage among many clients for batch submission and processing in the cloud.
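as a hint of the caching side of part v, here is a minimal sketch of a client-side cache with per-entry expiry, keyed by URL. the names are my own illustration; a real SMashup might back this with opensocial persistence or google gears instead of a plain object:

```javascript
// a tiny time-to-live cache: entries expire ttlSeconds after insertion
function SimpleCache(ttlSeconds) {
    this.ttl = ttlSeconds * 1000;
    this.store = {};
}

// the optional `now` argument makes the cache testable without a real clock
SimpleCache.prototype.put = function (key, value, now) {
    var t = (typeof now === "number") ? now : new Date().getTime();
    this.store[key] = { value: value, expires: t + this.ttl };
};

SimpleCache.prototype.get = function (key, now) {
    var t = (typeof now === "number") ? now : new Date().getTime();
    var entry = this.store[key];
    if (!entry) { return null; }
    if (t > entry.expires) {
        delete this.store[key]; // stale: evict and report a miss
        return null;
    }
    return entry.value;
};

var cache = new SimpleCache(60); // entries live for 60 seconds
cache.put("http://example.com/feed", "<rss/>", 0);
var hit = cache.get("http://example.com/feed", 30000);  // within the ttl: "<rss/>"
var miss = cache.get("http://example.com/feed", 61000); // past the ttl: null
```

a cache like this sits naturally in front of makeCachedRequest: check for a hit before going over the wire at all.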

finally, by part vi, we'll work to integrate all of the pieces together to discuss new opportunities in the world--both as they exist today, and where they will be in the future.
Welcome, the SMashup!

now there's one last thing to consider. when i google 'smashup' or 'smashups', there are ABSOLUTELY NO google ads attached to those words :) -- ok, i just checked, and amazon does have one ad up for 'smashups', related to a music band. also, in my search i found some instances where newbie developers had improperly referred to mashups as smashups, but overall, the concept of smashups hasn't yet taken root in the mainstream.

therefore, i'm publishing this article in the attempt to improve the dialogue concerning these new technologies, and hopefully to learn more about what others are doing in this area.
happy coding,

nolybab praetorius
END OF PART I

see part II here