Friday, July 10, 2009

jQuery DatePicker Gotcha - Don't call it twice

I spent several hours tracking down an issue with the jQuery date picker control today and discovered a bit of a gotcha. I was displaying a simple table, with dynamically created editable rows that contained date controls. When the table was first created, the date picker in each row worked fine. But after hiding and updating the table the date pickers did not work.

I thought initially that it was an issue with z-indexing and that the date picker was being displayed, just behind the dialog containing the table. Working on this assumption, I spent a while fighting with z-index settings in the jquery-ui css file as suggested in various online postings.

After looking at my code though, I realized that I was running this code after updating my table to enable the date picker on new rows:


    this._editor().find('.DateEntry').datepicker(opt);


The problem was that I wasn't deleting all the rows in my table and recreating them; I was only updating a subset of rows and adding a few new ones. Thus I was calling the date picker function on an element that had already been enabled as a date picker. This prevented any elements from being made into date pickers.

Moral of the story, never call datepicker() twice on the same element. Bad mojo.
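One way to avoid the trap is to guard the initialization so each element is only set up once. Here is a standalone sketch of the idea (no jQuery; the function and flag names are my own), mirroring the "hasDatepicker" marker class that jQuery UI puts on initialized inputs:

```javascript
// Initialize each input at most once, using a flag to mark elements that
// have already been set up; a second call on the same element is a no-op.
function enableDatePickers(inputs, init) {
  var enabled = 0;
  inputs.forEach(function (input) {
    if (input.hasDatepicker) return; // already a date picker: skip it
    input.hasDatepicker = true;
    init(input);
    enabled++;
  });
  return enabled;
}
```

With jQuery UI itself, the equivalent filter is to add .not('.hasDatepicker') before the .datepicker(opt) call, so only fresh elements get initialized.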

Friday, June 5, 2009

JQuery Callback Context

Most of my AJAX-heavy web applications are built around a "client control" model, similar to the user control model in ASP.NET, in which a client-side JavaScript class encapsulates all the functionality related to an editor, list, or other interface component. This means that my callbacks (both AJAX and event handlers) need to execute in the context of the containing JavaScript object, not the object executing the function that returns the data.

After playing around with a couple of methods I settled on using the jQuery.context plug-in. It's a very nifty bit of code that relies on a closure to capture the initial state. While this works great, I often find myself needing to pass some state information back along with the callback, typically when I am dealing with arrays of items (e.g. a column of buttons in a table that trigger click events).

I solved my problem by modifying the context plug-in to take an additional parameter that is stored inside the closure and appended to the parameter list when the callback is executed. My revised code looks like this:



    // $.context
    jQuery.extend(
    {
        context: function (context)
        {
            var co =
            {
                callback: function (method, state)
                {
                    if (typeof method == 'string') method = context[method];
                    var cb = function ()
                    {
                        var args = new Array();
                        for (var i = 0; i < arguments.length; i++) {
                            args[i] = arguments[i];
                        }
                        args[arguments.length] = state;
                        method.apply(context, args);
                    };
                    return cb;
                }
            };
            return co;
        }
    });

I'm using the same apply function as in the original code, but I'm copying over the argument list and appending the state object that was passed in along with the context. Copying the arguments is important: the arguments object is only array-like, and just appending the state to it directly causes funky results.

I initially planned on allowing an array of arguments, but since it's so easy to create a JSON-style object on the fly as your state, that seemed more confusing than useful.
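For reference, the same closure trick can be written as a standalone function, without the jQuery.extend wrapper (the names here are mine, not the plug-in's):

```javascript
// Bind a method to a context object and append an extra state argument
// when the callback eventually fires.
function makeContext(context) {
  return {
    callback: function (method, state) {
      if (typeof method === 'string') method = context[method];
      return function () {
        // copy the array-like arguments object into a real array
        var args = Array.prototype.slice.call(arguments);
        args.push(state); // state rides along as the last argument
        return method.apply(context, args);
      };
    }
  };
}
```

A click handler for row 3 of a table might then be wired up as makeContext(myEditor).callback('onRowClick', { row: 3 }).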

Monday, June 1, 2009

Script Bloat

The Problem

Rich client web applications tend to use a lot of JavaScript. Not only do they rely on extensive libraries to support client side functionality, but in my opinion good modular code design tends to lead to lots of JavaScript files (especially since the Visual Studio JavaScript editor lacks code folding or regions and thus makes working with large files tiresome, to say the least). Typically I find it easiest to place each client side class in a single file, much like we do with our server side code. Thus, for a typical page, consisting of a list, a filter, and one or two editor controls, I might have the following script includes:
  • JSON library
  • jQuery / jQuery UI
  • Five or six standard jQuery plug-ins (blockui, timer, dimensions, hoverintent, flot, bgiframe, etc.) to provide extended functionality beyond that of the base libraries
  • Our own library of base controls that the domain specific controls inherit from to obtain our form, list, and filter functionality
  • A control for each editor and list on the page, with these controls often having sub controls
Looking over a sample page from a recent application, I found 18 scripts included! This can cause significant performance issues for clients, especially those with more limited bandwidth, for two reasons:
  • Total script size: We are downloading source code, which can be quite large. Source code contains white space and comments, and is formatted to aid maintainability, not to produce svelte downloads.
  • Number of Requests: Since there may be dependencies between the scripts (almost everything we use has some dependency on jQuery for instance), browsers will only download and evaluate one script at a time. Given the overhead in fetching and evaluating each script, this time can add up.
There are several established techniques to address these issues:
  • Compression: Most modern browsers support gzip compression. Using gzip can reduce the size of the scripts significantly.
  • Concatenation: By combining all of your scripts into a single file, the client only needs to make one request. Order is still important, so you need to concatenate the scripts in the correct order of their dependencies.
  • Remove whitespace: Tools like JSMIN and packer can remove whitespace and comments, greatly reducing the download size.
  • Caching: By enabling caching for libraries and other scripts that are unlikely to change frequently, the browser will only download the script file once.

While using these techniques can make an enormous difference to the end user, they can make life very hard on the developer. Compression and caching require you to configure and maintain settings in IIS, which adds additional deployment and maintenance concerns. And combining / minifying your scripts makes them very difficult to maintain. Imagine having all your developers work on a single, uncommented, whitespace-stripped JavaScript file!

The Solution

The solution is to apply these techniques at run time, rather than at development or build time. There are a variety of implementations for various frameworks, but none quite suited my needs. Specifically, I wanted something that would:
  • Work with minimal IIS configuration and work in both IIS6 and IIS7
  • Allow for a "debug" mode that delivered readable scripts to aid in debugging
  • Require minimal configuration and ideally use a centralized configuration file
  • Be flexible enough to allow for varying cache intervals and to compensate for the fact that some library scripts I've used seem to dislike being minified
  • Work well with .NET, but not require the client side AJAX.NET libraries (no ScriptManager base implementations)
  • Be simple

Overview

My solution is based on a custom ASP.NET HttpHandler that allows web pages to include requests for scripts groups instead of just single scripts. The handler dynamically concatenates and minifies the script as well as emitting caching and gzip compression headers as appropriate.

A request for a script group containing all of my base jQuery files would look like:


    <script src="/ScriptOptimizer.pragmatix?groups=jQuery" type="text/javascript"></script>

Multiple groups can be combined into a single request by passing them as a comma-separated list:


    <script src="/ScriptOptimizer.pragmatix?groups=jQuery,PragmatixBase" type="text/javascript"></script>


I define my script group mappings in the web.config inside a custom configuration section. I've seen a few folks online try to auto discover the scripts referenced by a page to avoid a configuration file, but I felt the overhead in terms of code wasn't worth the work.


    <ScriptOptimizer>
      <ScriptGroup Name="jQuery" Compress="true" AllowCache="true" CacheLengthInDays="7">
        <Script Path="/scripts/jquery-1.3.2.min.js" Enabled="true" Minify="false"/>
        <Script Path="/scripts/jquery-ui-1.7.1.custom.min.js" Enabled="true" Minify="false"/>
      </ScriptGroup>
      <ScriptGroup Name="PragmatixBase" Compress="true" AllowCache="false" CacheLengthInDays="1">
        <Script Path="/scripts/V2/PragmatixControl.js" Enabled="true" Minify="true"/>
        <Script Path="/scripts/V2/PragmatixModalEditor.js" Enabled="true" Minify="true"/>
      </ScriptGroup>
    </ScriptOptimizer>



As you can see, the caching and compression options apply to an entire script group, while enabling and minification can be controlled at the per-script level. If you combine groups, the lowest caching interval applies to all included scripts.
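The handler's GetCachingSettings implementation isn't shown here, but the combination rule is simple enough to sketch. In JavaScript, with hypothetical names and assuming the flags combine with a logical AND (the post only spells out the interval rule), it amounts to:

```javascript
// Combine the settings of several script groups: a feature stays enabled
// only if every group enables it, and the shortest cache interval wins.
function combineGroupSettings(groups) {
  return groups.reduce(function (acc, g) {
    return {
      compress: acc.compress && g.compress,
      allowCache: acc.allowCache && g.allowCache,
      cacheLengthInDays: Math.min(acc.cacheLengthInDays, g.cacheLengthInDays)
    };
  });
}
```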

The Configuration

In order to keep the configuration info inside the web.config and avoid yet another config file, I implemented my own configuration section. I just found out that my previous way of doing this, using the IConfigurationSectionHandler interface, is deprecated, but I chose to ignore this for now instead of using the newer ConfigurationSection class. This allowed me to stick with the same XmlSerializer-based loading and saving procedures that I use for other stuff. Plus, I didn't feel like implementing a custom class for all my collections (although using generics is a pretty solid workaround for that, see: http://utahdnug.org/blogs/josh/archive/2007/08/21/generic-configurationelementcollection.aspx)

The configuration section is pretty simple. The configuration data is loaded up as an object by deserializing the contents of the config section.


    public object Create(object parent, object configContext, XmlNode section)
    {
        XmlSerializer s = new XmlSerializer(typeof(ScriptOptimizerConfig));
        return (ScriptOptimizerConfig)s.Deserialize(new XmlNodeReader(section));
    }




The ScriptOptimizerConfig class contains a collection of script groups, which in turn contain a collection of script definitions, all wired up using XML markup attributes like this:


    [XmlRoot(ElementName = "ScriptOptimizer")]
    public class ScriptOptimizerConfig
    {
        #region Properties

        [XmlElement("ScriptGroup")]
        public List<ScriptOptimizerScriptGroupConfig> ScriptGroups
        {
            get { return m_ScriptGroups; }
            set { m_ScriptGroups = value; }
        }
        private List<ScriptOptimizerScriptGroupConfig> m_ScriptGroups;

        #endregion
    }




I really like using the .NET serialization; it's possible to whip up a quick, nicely structured configuration file in just a few minutes. Plus, if the project scope expands, I can reuse the configuration objects with NHibernate persistence without significant code changes.


The Code

The actual work of script optimization is done inside the ScriptOptimizerHttpHandler class. The entry point is the ProcessRequest method.


    public void ProcessRequest(HttpContext context)
    {
        // load our configuration section
        ScriptOptimizerConfig Config = (ScriptOptimizerConfig)ConfigurationManager.GetSection("ScriptOptimizer");
        string[] groups = context.Request.QueryString["groups"].Split(new char[] { ',', ';', ':' });

        // determine the combined settings when multiple groups are requested
        ResultantScriptGroupSetting CombinedGroupSettings = Config.GetCachingSettings(groups);

        // set up GZIP compression if configured for such and the client allows it
        if (CombinedGroupSettings.Compress.Value && IsGZipSupported(context))
        {
            context.Response.AppendHeader("Content-Encoding", "gzip");
            ICSharpCode.SharpZipLib.GZip.GZipOutputStream OutputGZIPStream;
            OutputGZIPStream = new ICSharpCode.SharpZipLib.GZip.GZipOutputStream(context.Response.Filter);
            OutputGZIPStream.SetLevel(ICSharpCode.SharpZipLib.Zip.Compression.Deflater.BEST_COMPRESSION);
            context.Response.Filter = OutputGZIPStream;
        }

        // ready the response for writing out the scripts
        context.Response.Clear();
        context.Response.ContentType = "application/x-javascript";

        // write caching headers
        // if we are combining groups, we need to generate a combined set of caching headers
        // that makes sense. In this implementation, the lowest caching interval wins
        if (CombinedGroupSettings.AllowCache.Value)
        {
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetExpires(DateTime.Now.AddDays(CombinedGroupSettings.CacheLengthInDays.Value));
        }
        else
        {
            context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        }

        // append the scripts
        AppendScripts(context, Config, groups, CombinedGroupSettings);
    }




First I load the configuration section, parse the requested groups out of the query string, and calculate the combined group-level settings if more than one group is requested. Next we emit our compression and caching headers. Finally we call the AppendScripts function to return each of the requested scripts.


    public void AppendScripts(System.Web.HttpContext context, ScriptOptimizerConfig Config, string[] groups, ResultantScriptGroupSetting CombinedGroupSettings)
    {
        foreach (string groupName in groups)
        {
            ScriptOptimizerScriptGroupConfig groupconfig = Config.GetGroup(groupName);
            // loop over each script, minify if necessary, and append to the output stream
            foreach (ScriptOptimizerScriptConfig scriptconfig in groupconfig.Scripts)
            {
                if (scriptconfig.Enabled)
                {
                    string FullScriptPath = context.Server.MapPath(scriptconfig.Path);

                    // we can choose to exclude scripts from the minification process;
                    // some libraries we don't need to debug and some scripts react poorly
                    if (scriptconfig.Minify)
                    {
                        MemoryStream FileJavaScript = new MemoryStream(Encoding.ASCII.GetBytes(File.ReadAllText(FullScriptPath)));
                        JavaScriptMinifier min = new JavaScriptMinifier();
                        min.Minify(FileJavaScript, context.Response.OutputStream);
                    }
                    else
                    {
                        context.Response.WriteFile(FullScriptPath);
                    }

                    // this helps correct for scripts that aren't properly terminated;
                    // that's fine when they are individual files, but causes issues when concatenated
                    context.Response.Write(";;");
                }
            }
        }
    }



I am using a modified version of the JSMIN C# class provided by Douglas Crockford. My only modification was to rework some of the input and output functions to make it easier to interface with the library.



Installation

First, you need to add the configuration section to your web.config :


    <section name="ScriptOptimizer" type="Pragmatix.ScriptOptimizer.ScriptOptimizerConfigurationHandler, CDXLibrary"/>



Next, register the HttpHandler in the httpHandlers section like so:


    <add verb="*" path="ScriptOptimizer.pragmatix" type="Pragmatix.ScriptOptimizer.ScriptOptimizerHttpHandler, CDXLibrary"/>



You can choose any path you like; my choice was pretty arbitrary. One thing to remember is that, while this will work fine in Visual Studio (using the built-in Cassini web server), it will fail in IIS 6 unless you register whatever path you chose to be handled by the ASP.NET ISAPI DLL. IIS 7 doesn't have this problem if you use integrated mode, since all requests are routed through .NET.





Once this is configured, define your script groups, add the references in your pages, and you should be good to go.

The Results

The Firebug windows below show the results of adding the script optimizer to a page. The number of blocking script requests is greatly reduced, shortening the "JavaScript load ladder" and reducing page load time by almost two thirds. (NB: this is running locally on a test server with Firebug and Fiddler running; 9.91 seconds is not a good production page time!)

Before:


After:


Future Plans

This same technique could be applied to combine and compress CSS, and possibly HTML fragments if you load HTML dynamically on the client side. Eventually I plan to add a server control to replace the script reference tag, so that in debug mode I can return multiple script references for easier searching through code in Firebug. I would also like to do some server side caching of the resultant output, reducing the overhead of running JSMIN and reading all the files from disk.

The Files


If you want to try this out for yourself, here is the source code.

Friday, May 29, 2009

XML Serialization With Subclasses

The .NET XML serialization libraries are a huge time saver for quickly implementing configuration files and small local data stores. All it takes is a few attribute markup tags to allow for quick saving and loading of an object graph to disk.

One of the shortcomings of this approach is how .NET handles the serialization of subclasses. To illustrate this, let's take a quick look at a configuration file I developed for a data processing service I was working on recently. The service allowed the user to configure jobs that ran on a scheduled basis, performed configurable data searches against a remote database, transformed the data, and then ran the data through one or more output plug-ins. These plug-ins performed a variety of different actions on the retrieved data. Each output plug-in needed a different configuration object, since it did different things. The service architecture allowed us to write new plug-ins as needed to perform custom integration tasks for specific clients without needing to release a new version of the app.

The architecture of the output system is pretty simple: the configuration classes all implement the same IOutputConfig interface, and we use a simple factory class to instantiate the corresponding IOutputPlugin classes from their saved config. This means that each persistent job configuration class has a property like this:



    public SerializableList<IOutputConfig> Outputs
    {
        get { return m_Outputs; }
        set { m_Outputs = value; }
    }
    private SerializableList<IOutputConfig> m_Outputs;



This list might contain several different output configuration types.


Now, .NET will serialize this list of IOutputConfig objects to disk just fine, but on loading the configuration it will fail, since it doesn't know what classes to create for each saved XML output configuration element.

There is a built-in mechanism to resolve this problem: using the XmlArrayItemAttribute, you can specify all the subclasses that should be deserialized.



    [XmlArrayItem(typeof(EmailOutputConfig)),
    XmlArrayItem(typeof(DiskOutputConfig)),
    XmlArrayItem(typeof(SharepointOutputConfig))]
    public List<IOutputConfig> Outputs;



This didn't really suit my needs, for two reasons:
  1. It's messy and breaks encapsulation. We are leaking knowledge of the subclasses upwards, which is bad design. New output implementations would have to modify the attribute in the library classes.
  2. Since we are implementing new functionality via plug-ins (which are loaded dynamically), we don't know the full list of possible subclasses at compile time and thus can't list them in the attribute, even if we were willing to hold our noses and do so.
We could also work around this by having our subclasses implement the IXmlSerializable interface, but this would require mucking about with XML readers and writers for every configuration type we implemented, which takes time and thus negates a lot of the benefits of this approach.

The approach I used was to implement two custom classes, SerializableList and SerializableDictionary. These classes implement IXmlSerializable to wrap each of the subclasses in an Item tag. This tag records the type of the saved object so we know what type to feed an XmlSerializer when restoring the data.



    <Outputs>
      <Item type="APP.Output.DiskOutputConfig, APPDAC">
        <Disk>
          <BasePath>c:\test\exrs\two\</BasePath>
          <PathPattern>{DATESTAMP}_{TIMESTAMP}</PathPattern>
          <OverwriteFile>true</OverwriteFile>
        </Disk>
      </Item>
      <Item type="APP.Output.EmailOutputConfig, APPDAC">
        <Email>
          <SMTPServer>127.0.0.1</SMTPServer>
          <Recipient>a@test.com</Recipient>
          <Recipient>b@test.com</Recipient>
          <FilePattern>{DATESTAMP}_{TIMESTAMP}</FilePattern>
        </Email>
      </Item>
    </Outputs>



One point to note is that I am stripping the assembly version info from the saved type name. This avoids version changes in the assembly causing load errors. For major schema changes I would need to implement another set of classes anyway, so as to be able to have both loaded at once for translation.
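The type-tag trick isn't specific to .NET XML serialization. For illustration, the same idea in plain JavaScript/JSON (all names hypothetical) would be a registry that maps type tags back to constructors:

```javascript
// Each saved item is wrapped with a "type" field so a heterogeneous list
// can round-trip through JSON; the registry restores the right constructor.
var registry = {};
function register(name, ctor) { registry[name] = ctor; ctor.typeName = name; }

function saveList(items) {
  return JSON.stringify(items.map(function (item) {
    return { type: item.constructor.typeName, data: item };
  }));
}

function loadList(json) {
  return JSON.parse(json).map(function (entry) {
    // look up the constructor by its tag, then copy the saved state onto it
    return Object.assign(new registry[entry.type](), entry.data);
  });
}
```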

Here is the full code for both classes, as well as a link to a project you can use directly.



    [Serializable]
    public class SerializableList<TValue>
        : List<TValue>, IXmlSerializable
    {
        #region IXmlSerializable Members

        public XmlSchema GetSchema()
        {
            return null;
        }

        public void ReadXml(XmlReader reader)
        {
            bool wasEmpty = reader.IsEmptyElement;
            reader.Read();
            if (wasEmpty)
                return;

            while (reader.NodeType != XmlNodeType.EndElement)
            {
                string StateTypeDescriptor = reader.GetAttribute("type");
                Type StateType = Type.GetType(StateTypeDescriptor);

                reader.ReadStartElement();
                XmlSerializer valueSerializer = new XmlSerializer(StateType);
                this.Add((TValue)valueSerializer.Deserialize(reader));

                reader.ReadEndElement();
                reader.MoveToContent();
            }
            reader.ReadEndElement();
        }

        public void WriteXml(XmlWriter writer)
        {
            foreach (TValue item in this)
            {
                Type ValueType = item.GetType();
                XmlSerializer valueSerializer = new XmlSerializer(ValueType);
                string SubElementName = "Item";

                writer.WriteStartElement(SubElementName);

                writer.WriteStartAttribute("type");
                writer.WriteString(Serialization.GetTypeName(ValueType));
                writer.WriteEndAttribute();

                valueSerializer.Serialize(writer, item);

                writer.WriteEndElement();
            }
        }

        #endregion
    }





    [Serializable]
    public class SerializableDictionary<TKey, TValue>
        : Dictionary<TKey, TValue>, IXmlSerializable
    {
        #region IXmlSerializable Members

        public XmlSchema GetSchema()
        {
            return null;
        }

        public void ReadXml(XmlReader reader)
        {
            XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));

            bool wasEmpty = reader.IsEmptyElement;
            reader.Read();
            if (wasEmpty)
                return;

            while (reader.NodeType != System.Xml.XmlNodeType.EndElement)
            {
                string StateTypeDescriptor = reader.GetAttribute("type");
                Type StateType = Type.GetType(StateTypeDescriptor);
                XmlSerializer valueSerializer = new XmlSerializer(StateType);

                reader.ReadToFollowing("key");
                reader.ReadStartElement("key");
                TKey key = (TKey)keySerializer.Deserialize(reader);
                reader.ReadEndElement();

                reader.ReadStartElement("value");
                TValue value = (TValue)valueSerializer.Deserialize(reader);
                reader.ReadEndElement();

                this.Add(key, value);
                reader.ReadEndElement();
            }

            reader.ReadEndElement();
        }

        public void WriteXml(System.Xml.XmlWriter writer)
        {
            // the keys can't be subclassed, only the values can
            XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));

            foreach (TKey key in this.Keys)
            {
                TValue value = this[key];
                Type ValueType = this[key].GetType();
                XmlSerializer valueSerializer = new XmlSerializer(ValueType);

                writer.WriteStartElement("item");
                writer.WriteStartAttribute("type");
                writer.WriteString(Serialization.GetTypeName(ValueType));
                writer.WriteEndAttribute();

                // serialize the key
                writer.WriteStartElement("key");
                keySerializer.Serialize(writer, key);
                writer.WriteEndElement();

                // serialize the value
                writer.WriteStartElement("value");
                valueSerializer.Serialize(writer, value);
                writer.WriteEndElement();

                writer.WriteEndElement();
            }
        }

        #endregion
    }





    namespace Pragmatix.Serialization
    {
        public class Serialization
        {
            public static string GetTypeName(Type t)
            {
                string ClassName = t.FullName;
                string SimpleAssembly = t.Assembly.FullName.Split(new string[] { "," }, StringSplitOptions.RemoveEmptyEntries)[0];
                return ClassName + ", " + SimpleAssembly;
            }
        }
    }



Source code

Javascript Inheritance

When I first embarked on larger scale JavaScript projects (not just the standard event handler snippets), it took me a while to adjust to the differing inheritance style, since I was used to class-based programming. Initially I flirted with the Microsoft AJAX.NET approach, which attempts to bolt a class-based, strongly typed inheritance model onto JavaScript (even to the point of mimicking enumerations and interfaces), but eventually I realized that it was best to use the language the way it was intended to be used.

After a bit of searching among the variety of approaches out there, I ended up using the parasitic inheritance model proposed by Douglas Crockford. The essence of this approach is having the constructor for a subclass instantiate an instance of its superclass, modify it by adding methods, and then return the modified object. A simple inheritance would look like this:




    car = function (manufacturer) {
        // class we inherit from
        var me = new vehicle(manufacturer);
        me.honkHorn = function () {
            // do stuff
        };
        return me;
    };

    var myBlueVolvo = new car("Volvo");



This seems pretty natural and gives most of what I was looking for (code reuse and some variable hiding) without torturing the language too badly.

I do occasionally need to call methods in the superclass from the subclass, though. Douglas implements a relatively complex bit of "sugar" that automatically provides a reference to an "uber" object, mimicking the "base" keyword in C#, but I was worried that this was a lot of extra code for something I really only do with a handful of functions. Complexity is bad.

Instead, when I need to reference an overridden method in the superclass, I took advantage of JavaScript closures to provide a simple way to call a superclass method. Here is an example:



    // "super" is a reserved word in JavaScript, so use another name
    var superHonkHorn = me.honkHorn;
    me.honkHorn = function () {
        // call the overridden method on the superclass, preserving "this"
        superHonkHorn.call(me);

        // now do stuff in the subclass
    };



By saving the superclass function in a local variable, it is captured by the closure and is still around to be called, even though it is no longer referenced by the new object as a public function.
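Putting the pieces together, here is a complete, runnable sketch of parasitic inheritance with a superclass call (the vehicle/car bodies are invented for illustration):

```javascript
// Parasitic inheritance: the subclass constructor builds a superclass
// instance, augments/overrides it, and returns the augmented object.
var vehicle = function (manufacturer) {
  var me = {};
  me.manufacturer = manufacturer;
  me.honkHorn = function () { return "beep from " + me.manufacturer; };
  return me;
};

var car = function (manufacturer) {
  var me = new vehicle(manufacturer);

  // capture the superclass method in a closure before overriding it
  var superHonkHorn = me.honkHorn;
  me.honkHorn = function () {
    return superHonkHorn() + " (car edition)";
  };
  return me;
};
```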

Thursday, May 28, 2009

Flot Rocks

Overview

Adding graphs to web pages used to require building up images server side or using a Flash applet. Server side images had bandwidth concerns and limited interactivity, while Flash applets required another skill set to develop.

With the advent of client side JavaScript libraries like jQuery, several graphing toolkits have been written to allow client side graph generation and interaction. The one we use at work is called flot. It is open source, easy to develop with, provides excellent cross-browser functionality, is well integrated with our primary JavaScript library (jQuery), and generates very attractive graphs.


Figure 1 - Sample Flot Chart

Flot provides a host of useful features, including:

  • Intelligent scaling of axes based on series data
  • Zooming and drill down with clickable data points
  • Automatic legend creation
  • Complete control over formatting, with sensible defaults
  • Simple binding to JSON data for interacting with web services
  • Interpolation of data: data sets do not need to have a uniform collection of x-axis values
  • A variety of graph types, including time series, scatter plot, bar, and stacked series. Different graph types can be combined into a single graph


Figure 2 - Combining Chart Types

Walkthrough of Simple Graph



Our typical usage for flot is to calculate the data set server side and return a set of data series to load into flot. Flot expects the data for a series to be an array of point values, each point value being a simple two-element array. This is then contained inside another object, with additional properties describing the series. We have a collection of classes we use to load the data that serialize well into JSON and can be fed directly into our flot graph using the flot series format. A simple data series object is shown below:





    public class SimpleDataSeries
    {
        #region Properties

        public string label
        {
            get { return m_label; }
            set { m_label = value; }
        }

        private string m_label;

        public List<double[]> data
        {
            get { return m_data; }
            set { m_data = value; }
        }

        private List<double[]> m_data;

        #endregion

        public SimpleDataSeries(string label)
        {
            m_label = label;
            m_data = new List<double[]>();
        }

        public void AddDataPoint(double x, double y)
        {
            data.Add(new double[] { x, y });
        }
    }
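Once serialized to JSON, a data series object like this arrives on the client already in flot's native series format: an object with a label and an array of [x, y] point pairs. The values below are illustrative:

```javascript
// what a serialized series looks like client side (illustrative values)
var series = {
    label: "Server A",
    data: [[0, 512], [1, 640], [2, 768]]  // [x, y] point pairs
};
// an array of such objects is what gets handed to $.plot
```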



Flot has automatic capabilities for displaying and formatting time series data. For time series charts it expects the x values to be JavaScript timestamps, which are the number of milliseconds since 1/1/1970. To handle time series we use another server side class:




    public class TimeDataSeries : SimpleDataSeries
    {
        #region Properties

        private DateTime UnixStart
        {
            get { return m_UnixStart; }
            set { m_UnixStart = value; }
        }

        private DateTime m_UnixStart;

        #endregion

        public TimeDataSeries(string Label)
            : base(Label)
        {
            UnixStart = new DateTime(1970, 1, 1);
        }

        public void AddDataPoint(DateTime x, double y)
        {
            double JavascriptTimestamp = x.Subtract(UnixStart).TotalMilliseconds;
            AddDataPoint(JavascriptTimestamp, y);
        }
    }
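The same convention holds on the client: a flot time-mode timestamp is exactly what JavaScript's own Date APIs produce, so the server-side TotalMilliseconds calculation lines up with Date.getTime():

```javascript
// a flot time-series x value is milliseconds since 1/1/1970 (UTC),
// which is exactly what Date.UTC and Date.prototype.getTime return
var ts = Date.UTC(2009, 6, 10);   // July 10, 2009 (months are 0-based)
var point = [ts, 42.5];           // illustrative y value
```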




In order to get this data to the client, we use a web service that returns a listing of these data series in the response object. Here is a web service call that returns a set of data series describing the storage utilization of several servers on our network, both for each machine individually and as a set of summary series:





    [WebMethod(true)]
    public IUpdateResponse Update(IUpdateRequest request)
    {
        // get the current user
        User CurrentUser = User.GetUser(HttpContext.Current.User.Identity.Name);
        // create a response and initialize it with base response data
        UpdateResponse response = new UpdateResponse();
        // add our main response properties
        VaultStorageCalculator storagecalc = new VaultStorageCalculator(request.VaultId, request.MachineIds, request.StartDate, request.EndDate, 45);

        GraphData gd = storagecalc.GetGraphData();
        response.DataSeries = gd.Machines;
        response.SummarySeries = gd.Total;
        // return the response
        return response;
    }

    interface IUpdateResponse
    {
        List<SimpleDataSeries> DataSeries { get; set; }
        List<SimpleDataSeries> SummarySeries { get; set; }
    }



We take advantage of the excellent serialization provided by the Microsoft ASP.NET AJAX server side libraries to serialize this to the client. Using this method, we keep most of the heavy lifting of calculating the series data on the server side, where we can use C# code with direct access to our data model.



Once we have fetched this data on the client, building a simple chart on the client side can be done with only a few lines of code, like so:




    // plot the individual machines
    var options = {
        lines: { show: true },
        points: { show: true }
    };
    $.plot(this._element.find("div[type=chartArea]"), response.DataSeries, options);




By adding items to the options object, we get fine-grained control over display elements like the formatting of the axis labels. By adding the code below to the options object, I can format the y-axis to display the server storage utilization in logical units of megabytes and gigabytes.



    options.yaxis =
    {
        tickFormatter: function suffixFormatter(val, axis) {
            if (val > 1000000000)
                return (val / 1000000000).toFixed(axis.tickDecimals) + " GB";
            else if (val > 1000000)
                return (val / 1000000).toFixed(axis.tickDecimals) + " MB";
            else if (val > 1000)
                return (val / 1000).toFixed(axis.tickDecimals) + " kB";
            else
                return val.toFixed(axis.tickDecimals) + " B";
        }
    };
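Since the tick formatter is plain JavaScript, its behavior is easy to check in isolation, outside of flot. The axis argument below is a stand-in object carrying just the tickDecimals field flot would supply:

```javascript
// standalone copy of the suffix formatter; "axis" is a stand-in object
// with only the tickDecimals field that flot would normally pass in
function suffixFormatter(val, axis) {
    if (val > 1000000000)
        return (val / 1000000000).toFixed(axis.tickDecimals) + " GB";
    else if (val > 1000000)
        return (val / 1000000).toFixed(axis.tickDecimals) + " MB";
    else if (val > 1000)
        return (val / 1000).toFixed(axis.tickDecimals) + " kB";
    else
        return val.toFixed(axis.tickDecimals) + " B";
}
// suffixFormatter(2500000, { tickDecimals: 1 }) → "2.5 MB"
```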







We can tie event handlers to a variety of plot events, including:

  • clicking on data series elements
  • clicking on legend items
  • hovering over plot elements
  • selecting regions of the plot

For our server storage plot, we can bind an event to the chart to display a tooltip style popup when a user hovers over a data point. This allows us to show the actual storage values without cluttering up the main graph. Here is the code to bind to the hover event.





    $(this._element.find("div[type=chartArea_Total]")).bind("plothover", $.context(this).callback('_showTooltip'));
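The _showTooltip implementation itself isn't shown here; as a sketch, flot passes the hover callback an event, the mouse position, and the hovered item (or null), where item.datapoint holds the [x, y] pair and item.series the owning series. The text-building part is plain JavaScript:

```javascript
// builds the tooltip text from a flot hover "item"; a sketch, since the
// original _showTooltip implementation isn't reproduced in this post
function tooltipText(item) {
    return item.series.label + ": " + item.datapoint[1];
}
// inside the real handler this text would go into an absolutely
// positioned div placed near item.pageX / item.pageY with jQuery
```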




Another neat feature is the in-depth control over the legend. By adding a legend object to our options definition, we can add arbitrary HTML markup to the legend. Here I use the legend markup to let users click on a machine in the legend to view another page with detailed machine data.




    options.legend =
    {
        show: true,
        container: this._element.find("div[type=legendArea]"),
        noColumns: 2,
        labelFormatter: function(label, series) {
            // series is the series object for the label; wrap the label in
            // a link to the machine detail page (the original markup was
            // mangled by the blog engine, so this href is illustrative)
            return '<a href="machineDetail.aspx?machine=' + label + '">' + label + '</a>';
        }
    };





Using these relatively simple techniques, you can build highly interactive graphs using only JavaScript and .NET web services.