New Collaborative Research Project Between NetApp and EECS@WSU

NetApp and the Software Engineering Research and Services (SERS) Laboratory at WSU announced a new research partnership to develop automated, goal-oriented user interfaces for the next generation of storage management software.  The effort is also expected to identify new e-commerce opportunities.

The partnership builds on the respective strengths of the two organizations: NetApp’s natural language and user experience technologies in the computer-system storage domain (industry relevance and market opportunities), and the SERS Lab’s research contributions in the construction and evolution of large-scale software systems, including software analytics (academic research).  One projected outcome is a technology that lets IT professionals specify a system management goal in natural language, after which the automated solution determines the configuration parameters and values that optimize the desired quality metric (e.g., capacity or performance).  This paradigm offers a radical shift from the ubiquitous form-based interfaces in the targeted market.

Overall, the effort will include multiple phases, with foundational work already underway on data analytics of configuration parameters and the construction of an actionable model.  The concluding phase is directed toward technology transfer and identifying upselling revenue opportunities.  NetApp is currently funding the project, which involves two PIs and a Ph.D. student and is expected to add contributors.
The long-term goal is to formulate an academic-industry alliance that will integrate and sustain a spectrum of activities, ranging from scientific research to industry practice to classroom education.

Mr. Chris ONeil and Dr. Huzefa Kagdi are the PIs from NetApp and WSU, respectively.

The Importance of Algorithms when Searching for IT Solutions

Algorithms were originally developed by mathematicians for mathematicians. More than two thousand years ago, the Greek mathematician Euclid devised one of the earliest known algorithms, which computes the greatest common divisor of two numbers.
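Euclid's procedure is short enough to sketch directly; here is a minimal JavaScript version:

```javascript
// Euclid's algorithm: repeatedly replace the pair (a, b) with
// (b, a mod b) until the remainder is zero; what's left is the GCD.
function gcd(a, b) {
  while (b !== 0) {
    [a, b] = [b, a % b]; // the remainder becomes the new divisor
  }
  return a;
}

console.log(gcd(48, 18)); // → 6
```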

The use of algorithms is now going through a dramatic revolution.  Essentially a set of step-by-step instructions, algorithms are used in a multitude of applications we use every day.  When I take a photograph, for example, a face-detection algorithm systematically identifies the components of a face, regardless of size or shape, enabling the camera lens to focus.

A plethora of search engines, such as AltaVista, Bing, Google, Lycos, Magellan, Yahoo! and countless others, rely on algorithms.  In the mid-1990s, Sergey Brin and Larry Page developed the PageRank algorithm, and a more efficient search engine was born:  Google.  Today, Google handles roughly 3.5 billion web searches each day.  PageRank’s effectiveness comes from examining a page’s incoming links and weighting each link by the importance of the page it comes from.
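PageRank's core idea (a page is important if important pages link to it) can be sketched as a few lines of power iteration. The three-page link graph and the damping factor below are purely illustrative, not real web data:

```javascript
// Toy PageRank by power iteration.
// links[i] lists the pages that page i links to.
const links = [[1, 2], [2], [0]]; // 0 → 1,2;  1 → 2;  2 → 0
const n = links.length;
const d = 0.85;                      // conventional damping factor
let rank = new Array(n).fill(1 / n); // start with uniform rank

for (let iter = 0; iter < 50; iter++) {
  // Every page gets a baseline (1 - d) / n, plus a damped share of the
  // rank of each page that links to it.
  const next = new Array(n).fill((1 - d) / n);
  links.forEach((outs, i) => {
    outs.forEach(j => { next[j] += d * rank[i] / outs.length; });
  });
  rank = next;
}

console.log(rank); // page 2, linked to by both others, ranks highest
```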

Searching for IT solutions requires new algorithms to discover a “feature document.”  A feature document is an expression of an IT solution that is readable by search engines for retrieval and that can execute three operations: monitor, manage, and provision.

The Role of SEO in IT solutions

In order to understand the complexities IT professionals face today, consider the relevance of Search Engine Optimization (SEO) to end users.  With millions of web pages available, SEO is essential: securing top positions for the right search terms makes the correct solution easier to find.

Conversely, millions of solutions are available to IT professionals, yet no search engine available today can retrieve the right tools for managing the Third Platform.  What does a search engine provide today?  Typically: documents, web pages, videos, maps, music, and so on.  Each of these items needs some type of user agent to make it usable.  To create an IT solution, it is necessary first to understand the demands of that user agent.  Specifically, the IT solution user agent must be able to display the solution much like a typical web page and, more importantly, enable one or more of the following operations: monitor, manage, and provision.

Emerging from browser technology today is a new capability called “responsive UX,” which is built from individual building blocks called widgets. Popular frameworks, including Google’s AngularJS, make it possible to use these widgets to create a system for executing tasks that monitor, manage, and provision an IT infrastructure.

Using these widgets, it is possible to express an IT solution.  This requires the widgets first to describe the IT solution; then to enable the solution to execute changes in the infrastructure; and ultimately to make the solution searchable in the form of a “feature document”: an expression of an IT solution that is readable by search engines for retrieval and that can execute the aforementioned operations of monitor, manage, and provision.
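To make the idea concrete, a feature document might look something like the following sketch. The fields, widget names, and operation descriptions are entirely hypothetical illustrations, not a published schema:

```javascript
// Hypothetical shape of a "feature document": searchable metadata plus
// the three operations the IT solution supports. All names illustrative.
const featureDocument = {
  title: 'Provision NFS volume for a VMware datastore',
  keywords: ['storage', 'NFS', 'provision', 'VMware'],
  widgets: ['capacity-slider', 'performance-gauge'], // responsive UX blocks
  operations: {
    monitor:   () => 'poll volume capacity and latency',
    manage:    () => 'adjust snapshot and QoS policies',
    provision: () => 'create the volume with the chosen parameters'
  }
};

// A search engine would index title/keywords; a user agent would render
// the widgets and expose the operations to the IT professional.
const supported = Object.keys(featureDocument.operations);
console.log(supported); // → ['monitor', 'manage', 'provision']
```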

The Google Generation & IoT

“Google generation”[1] is a popular term referring to a generation of young people, born after 1993, who are growing up in a world dominated by the internet.  For them, constant connectivity (being in touch with friends and family at any time, from any place and any device) is of utmost importance.[2]  According to Wikipedia, the phrase has entered popular vernacular as “a shorthand way of referring to a generation whose first port of call for knowledge is the internet and a search engine.”

This Google generation-driven demand for enhanced capabilities is placing increasing pressure on IT professionals to find more intuitive, AI-enabled ways to support the growing number of devices, applications and the “internet of things” (IoT).  The IoT connects devices and data, integrating business systems across any platform or operating system.  In essence, IoT promises to revolutionize business practices by linking the physical and digital worlds and taking so-called “Big Data” to the next level.  Businesses will have more tools at the ready to help them tap the potential of information they are already gathering, allowing them to improve operating efficiency and deliver more robust end-user applications.  McKinsey Global Institute estimates that applications enabled by IoT will generate between $3.9 and $11.1 trillion per year in new economic value by 2025.

Clearly, the confluence of enabling technologies will demand a robust, searchable and scalable IT solution to execute changes in infrastructure.

  1. Hamid Jamali, “The Google generation: the information behavior of the researcher of the future,” ResearchGate, accessed June 16, 2014.
  2. Jason Frand, “The Information Mindset,” September/October 2000, p. 15.

Enabling Technology Convergence

The development of algorithms and the other technologies that make up Artificial Intelligence (AI) is driving the paradigm shift to the Third Platform.  These enabling technologies are central to creating more end-user-friendly and efficient IT solutions supporting the applications consumers use every day.

AI powers everything from taking a picture or sending a text on a smartphone, to searching the web, using Facebook, planning a trip, or even saving lives by enabling a donor-matching database.

Demand for more innovative end-user solutions that improve efficiency and cost savings is driving advances in, and the convergence of, enabling technologies, including AI, Search Engine Optimization (SEO), speech recognition, natural language processing, and an emerging capability, One Click Provisioning (OCP), among others.

Creating a responsive User Experience (UX) technology stack to support AI-enabled applications requires accurate data about end-user behavior, patterns, and practices.  Whether it is supporting the speech recognition used in Siri, the auto-fill feature in a text message, or the completion of an online form, AI relies on an accurate model of the user’s goals and is meant to anticipate and interpret the user’s actions more intelligently based on those goals.

If accurate data are not used in its design and development, attempts to enhance the UX can backfire and alienate a user from the solution provider.  For example, if a job search website repeatedly pushes irrelevant listings to my inbox, I will most likely ignore future email messages from that site.  The problem is that the provider gathered incomplete or inaccurate data when it “fished” my name and address from the back door of a site where I am a registered user.

Evolution of the Third Platform

The development of mainframe computing in the 1950s came to be known as the First Platform. Its development revolved around the needs of the enterprise, with very little focus on the specialized needs of end-users.

Driven by the promise of lower cost of ownership and simplified deployment and use, the Second Platform emerged in the 1980s with a marked focus on Personal Computer (PC) technologies. Its continuing evolution has caused a significant paradigm shift toward meeting the needs of end users in the IT landscape.

Today, the trend toward improving end-user experiences and IT efficiency has led to the development of the Third Platform. First identified and described by International Data Corporation (IDC) in 2007, the Third Platform is defined by the following four pillars:

  1. Support for a plethora of mobile devices
  2. Social networking
  3. Cloud computing
  4. Big Data and associated analytics, such as Search Engine Optimization (SEO)

Consumers with limited IT expertise are driving the evolution of the Third Platform and the growing demand for simplified, automated solutions: for example, the ability to manage cloud computing solutions using terms they understand, without requiring intervention by highly trained IT professionals. Until now, simplified solutions to meet these needs have not been available.

However, by combining a responsive UX technology stack with machine-assisted learning and search engine technologies, it becomes possible for an IT professional to state a management goal in natural language and have the system respond seamlessly with options that optimize performance and/or capacity.
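The mapping from goal to options can be illustrated with a deliberately crude sketch. This is a toy, not the project's actual system: the catalog entries, option names, and keyword matching below are all invented for illustration; a real solution would use natural language processing and learned models rather than a regular expression:

```javascript
// Toy goal-to-options matcher (all data and names are illustrative).
const catalog = [
  { option: 'thin provisioning',  optimizes: 'capacity' },
  { option: 'flash pool caching', optimizes: 'performance' },
  { option: 'deduplication',      optimizes: 'capacity' }
];

// Guess which quality metric the stated goal cares about, then return
// the catalog options that optimize it.
function suggest(goal) {
  const metric = /fast|latency|performance/i.test(goal)
    ? 'performance'
    : 'capacity';
  return catalog.filter(c => c.optimizes === metric).map(c => c.option);
}

console.log(suggest('I need more space for backups'));
// → ['thin provisioning', 'deduplication']
```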

This solution is ideally suited to address the needs of the consumer as well.  As hardware and software firms adjust their business models to align more closely with the needs and demands of consumers before those of the enterprise, the blending of responsive UX technologies with artificial intelligence is enabling Goal Oriented UX experiences.

AngularJS Directive template vs templateUrl (synchronous vs asynchronous)

As one would hope, the output of template and templateUrl is the same; it’s just that one has to be careful about when the controllers and link functions are available.  Run this example JSFiddle and see.

template (synchronous lifecycle): Angular compiles the DOM depth first, instantiating each directive’s controller on the way down until the DOM is fully traversed. Then the link function of the deepest directive is invoked, and Angular traverses the DOM in reverse order, invoking each link function on the way back up.

templateUrl (asynchronous lifecycle): Angular again compiles the DOM depth first, but after instantiating the first controller it must wait for the asynchronous templateUrl response, so it immediately invokes the first directive’s link function. As the DOM is traversed, the compiler instantiates each directive’s controller and then its link function.
The example in the JSFiddle defines three directives using the template option; each template loads another directive.  The output shows that the compiler instantiates the controllers from parent to child in descending order, and then invokes the link functions from the deepest child back up to the parent.

Directive A Controller
Directive B Controller
Directive C Controller
Directive C LinkFn
Directive B LinkFn
Directive A LinkFn
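The synchronous ordering above can be reproduced with a small framework-free simulation of the compiler's depth-first walk (the directive tree here is a stand-in for the JSFiddle's nested directives, not Angular itself):

```javascript
// Framework-free sketch of the synchronous (template) lifecycle:
// controllers fire parent-to-child on the way down, link functions
// fire child-to-parent on the way back up.
function compileSync(node, log = []) {
  log.push(`Directive ${node.name} Controller`); // on the way down
  (node.children || []).forEach(child => compileSync(child, log));
  log.push(`Directive ${node.name} LinkFn`);     // on the way back up
  return log;
}

// A contains B, which contains C, mirroring the three nested directives.
const tree = { name: 'A', children: [{ name: 'B', children: [{ name: 'C' }] }] };
compileSync(tree).forEach(line => console.log(line));
```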

The code then defines three directives using the templateUrl option.  The output shows that each directive’s controller and link function are instantiated together, in descending order only.

Directive A URL Controller
Directive A URL LinkFn
Directive B URL Controller
Directive B URL LinkFn
Directive C URL Controller
Directive C URL LinkFn
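The asynchronous ordering can be modeled the same way. Because the compiler pauses for each template fetch, a directive's controller and link function run as a pair before the compiler descends to its child (again a framework-free sketch, not Angular's actual implementation):

```javascript
// Framework-free sketch of the asynchronous (templateUrl) lifecycle:
// each directive's controller and link function run together before
// the compiler moves down to the next child.
function compileAsyncOrder(node, log = []) {
  log.push(`Directive ${node.name} URL Controller`);
  log.push(`Directive ${node.name} URL LinkFn`);
  (node.children || []).forEach(child => compileAsyncOrder(child, log));
  return log;
}

const tree = { name: 'A', children: [{ name: 'B', children: [{ name: 'C' }] }] };
compileAsyncOrder(tree).forEach(line => console.log(line));
```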

Final thoughts

Angular best practices recommend using templateUrl, and since Angular stores the HTML in $templateCache, it makes sense to preload the HTML using something like the Grunt html2js task. The design of the directives should assume:

  • Expect child directives to be compiled at some unknown point in the future.
  • When the template does arrive, understand that the scope could be very different.
  • Do not assume variable bindings exist in the DOM at the time the controller or link function executes.

If the templates are preloaded, all the child directives will be available by the end of the digest cycle.
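The effect of preloading can be illustrated with a minimal stand-in for $templateCache. This is not Angular's API, just a sketch of why a cache hit removes the asynchronous fetch; the template names and contents are invented:

```javascript
// Minimal stand-in for Angular's $templateCache (illustrative only).
// A build step like grunt-html2js effectively generates these puts.
const templateCache = new Map();
templateCache.set('directiveA.html', '<div><directive-b></directive-b></div>');
templateCache.set('directiveB.html', '<div><directive-c></directive-c></div>');
templateCache.set('directiveC.html', '<div>C</div>');

// With a cache hit the template is returned synchronously, so no HTTP
// round trip interrupts the depth-first compile traversal.
function fetchTemplate(url) {
  return templateCache.has(url)
    ? templateCache.get(url) // synchronous hit
    : null;                  // would otherwise need an async $http request
}

console.log(fetchTemplate('directiveC.html')); // → '<div>C</div>'
```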