Archive for the ‘Flex’ Tag

Visual Programming Environments For Kids

Well, I happily finished my first semester at the Georgia Institute of Technology (GIT) at the beginning of May. It was a great semester: I coded more than I have in ages, had to relearn C/C++, and added LISP to my repertoire. The courses at GIT are project intensive, so I was able to do some fun stuff such as writing a multi-threaded web server that communicated with a proxy server via shared memory, implementing an inference engine in LISP, writing LLVM passes for detecting infeasible branches due to correlated predicates and parallelizable loops, and testing out some cool robots for an HCI project called Bots-For-Tots that I worked on with a group of four other Georgia Tech grad students. It’s the last one that I’ll focus on for purposes of this entry. The project had us go through a user-focused process from analysis, to design and selection, to prototyping. The end result was a programming environment we called Bot-Commander, which leveraged the open source technologies MERAPI & LEJOS (and of course AIR) to enable children (ages 3-8), and myself 😉, to easily program a Mindstorm robot. Considering that I have 3 children, ages 1, 4, and 6, and like most geeks am immediately drawn to words containing “bot”, this project was close to heart. For those of you interested in bots and/or child education, below is selected content from the project.

My co-contributors included: Albert Brzeczko, Basil Udoudoh,
Dimuthu, & Bryan Hickerson

The Premise

How do we teach children technology? is a basic question that, as the ubiquity of computing progresses in the 21st century, more and more parents and educators are grappling with at earlier stages of a child’s development. On the one hand the question is very important, assuming that a key measure of a community’s success is the number of technologists (e.g. engineers, computer scientists, etc…) that it outputs. On the other hand the question can be considered irrelevant, since children are bound to learn “enough” about the technology embedded in their community’s society and culture through its application. However, another consideration is whether “How do we teach children technology?” is the right question to be asking ourselves. The concern is that we risk teaching technology as a set of abstract concepts that are difficult for children to learn, internalize, and apply. What is lost is that technology has the ability to serve as a platform for children of all ages to apply creative thinking across multiple disciplines and interests. That ability is largely untapped today. While there has been substantial work in leveraging technology as a learning platform for older children in certain areas, the solutions have fallen short in terms of enabling younger children (ages 3 and up) and being adopted in the mainstream throughout schools and in the home.

It was in the early 1980s that Seymour Papert published his influential book, “Mindstorms: Children, Computers, and Powerful Ideas.” In the book Papert gave rise to a new mode of thinking called constructionism, in which he argues that through technology children can integrate the mechanical with the digital to create personally meaningful projects, from which they can problem solve, test, and create new ideas and conceptual models of the world. Personally meaningful projects are those that children are driven to work on out of personal interest. Papert’s research led to the creation of the Logo programming language, which was designed to be powerful enough for research yet simple enough that it could be used by children. The language was popularized with the introduction of a virtual turtle that children would teach to draw via programming.

From programmable turtles to bricks, crickets, and cats, the concepts introduced by Papert have led to a host of constructionist environments that help children learn about learning by teaching robots how to interact with the world in which they live. Through working with Papert, Lego Inc. introduced a commercial robot construction kit called the RCX Programmable Brick. Other lesser-known robot kits have become available as well, and virtual robots, similar to the Logo turtle, are now free for downloading. Common constraints in all of these environments are that they typically target an age group of 8 and above and require a high degree of investment by not only the child, but also the educator (parent or teacher), in terms of training and time. In this project we investigated these existing tools with the goal of designing a constructionist environment that not only targets younger children, but also reduces the cost to both the children and educators in terms of training and time, resulting in a product that is less prohibitive to mainstream usage.

The Focus Group

This section discusses the qualitative methods used for exploring the problem. We needed a focus group and we needed one fast.
The solution was a Bots-For-Tots pizza & ice cream party at my house, where I invited a bunch of my son’s classmates over (informal, but it worked).

Finding Bots

Our impressive inventory of bots consisted of Lego Mindstorm and MIT’s Scratch program, which gave us a physical and a virtual robot respectively. However, Lego Mindstorm targets children ages 10+, which we knew would likely be well above the children’s capabilities. We needed additional bots that targeted a younger age group to give a more accurate account of the current state of constructionist toys. The answer lay in the acquisition of two additional robots: Pico Crickets from the Playful Invention Company and the Roamer from Valiant, both of which subscribed to constructionist ideas and concepts.

Composition of the Group:

  • Ages ranged from 3 to 6.
  • All of the children were boys.
  • All of the parents involved professed a strong interest in their kids learning about technology.
  • Two of the parents had a job directly involved with technology, while others worked in the fields of medicine, psychology, and public relations.


The plan was simple.

Step I – Grease & Sugar
First, we served all of the parents and their children pizza and ice cream, providing us the opportunity to talk to them about their respective backgrounds while waiting for everyone to show up.

Step II – Constructionism 101
Next, we did a learning activity that did not involve computers or actual robots. The purposes of the activity were to:

  1. Learn about the children’s ability to comprehend technical concepts
  2. Provide an overview for kids and their parents of how each of the toys worked
  3. Have fun!

We asked the children for volunteers for both a robot and a set of computer programmers. One child volunteered as a robot, while the rest volunteered as computer programmers. We then brought out two poster boards, an empty one titled “Program” and a second one titled “Blocks” containing a set of square cut-outs velcroed to the back. We asked the children whether robots could “think” like we do. The children’s answers were mixed, but we explained that robots cannot think by themselves and that they need computer programmers to help them think by telling them what actions to take (i.e. creating a program). The children then participated in building a program by choosing action blocks from the “Blocks” poster board and moving them to the “Program” poster board. After the “computer programmers” were done creating their program, we had our acting robot execute it by stepping, jumping, growling, and barking as instructed by the program.

Programming 101


Step III – Breakout Time
After our brief course in robotics and programming, we gave a very brief introduction to the different robots around us and then broke up, letting the children gravitate to the robots that interested them the most. We had the following five robot stations set up.

Station I – Lego Mindstorm

At the core of Lego Mindstorm is a programmable brick that can accept input from up to three sensors and control up to three output devices (i.e. motors). While the brick has an interface for building programs directly on it, more typically users build a program in the Mindstorm programming environment and deploy it to the brick over either a USB cable or a Bluetooth connection.


Target Age Group: 10+

  • Providing a quick look at Mindstorm proved to be the most problematic for two reasons. First, if the robot was left idle it would shut down, at which point you have to re-establish the Bluetooth connection to demo it. This turned out to be an inconvenient interruption, requiring that we ask everyone to please wait while we re-established the connection. Second, in the Mindstorm visual programming environment each action/block has a great deal of configuration options, which were often difficult to see on a large screen and impossible to walk through with young children.
  • Parents and children were intrigued with the possibility of creating a humanoid robot as shown on the Mindstorm Box.
  • Parents found the lack of organization and hundreds of small pieces for Mindstorm to be daunting.
  • Surprisingly the children showed no interest in Lego Mindstorm once we broke up across the different stations.

Station II – Pico Crickets

Similar to Mindstorm, Pico Crickets leverages a visual programming interface, motors, input sensors, and Legos. However, Pico Crickets has two distinct differences. First, it is targeted at a younger age group of 8+. Second, Pico Crickets strives to work with the artistic capabilities and intuition of children rather than pure mechanics (i.e. gears and motors).

Pico Crickets

Target Age Group: 8+


  • Out of all the robots, Pico Crickets held the most attention, not only from children but also parents (one parent actually built a 7-step program). Three children spent a substantive amount of time on Pico Crickets.
  • The organization & number of pieces in the Pico construction kit was much less daunting than that of Mindstorm.
  • Children seemed to want to play with either the Lego pieces or the programming environment (to create pretty programs), but did not seem to make the connection between the two.
  • The children using the programming environment did so in the same way that they use Legos: they were snapping together virtual blocks to create diagrams, not to execute them.
  • One parent remarked that they liked the toy but felt that it required too much hand-holding for the children.

Station III – Scratch IDE

Scratch is a programming language developed by the Lifelong Kindergarten Group at the MIT Media Lab to help young people learn how to develop computer programs. The development of Scratch (and its name) was inspired by the scratching process that DJs use to create new sounds and music by rubbing old-style vinyl records back and forth on record turntables, creating new and distinctively different sound out of something that exists. Similarly, Scratch projects allow young developers to mix together graphics and sounds, using them in new and creative ways.


Targeted Age Group: 8+


  • There was one child who played with Scratch, and it was the only robot he played with the entire time. While he seemed to enjoy making Scratch (the out-of-box virtual cat robot) do stuff, he particularly enjoyed the more personal aspects of Scratch that enabled him to upload his own picture and record his own voice for use in a program. Note, however, that uploading his own picture is still a complex process for which he needed help.
  • Parents did show interest in the fact that Scratch was free.

Station IV – Valiant Roamer

The Roamer is a commercialized version of the physical dome-shaped robot that Papert initially worked on at MIT while designing the Logo language. While there is a visual programming environment for Roamer, similar to the other robots, there is no link between the visual programming environment and the physical robot. The programming environment, called Roamer World, is simply a simulation of the physical robot in a virtual world. The programming interface for the physical Roamer is the set of keys located on top of the robot.



Targeted Age Group: 4+


  • Both parents and children attempted to use the Roamer once, but then quickly left for another toy once it did not do as they intended (which always happened on the first attempt).

Station V – Wacky Wigglers Building Set

Now here is a robot that you can find in your typical toy store. While the Wacky Wigglers set would not be considered a constructionist kit due to its lack of an actual programming interface, we still wanted to put it out there to see how children would respond to its mechanical aspect. The objective of the Wacky Wigglers Building Set is to piece together a robot with a whole lot of gears, which you can then control using a remote control with basic forward, backward, and turn motions.


Target Age Group: 5+


  • There was substantive interest in the Wacky Wigglers Building Set. At least three children spent time successfully putting together parts of a robot. One child in particular committed himself all the way through until the robot was complete. Note: there was no adult involvement in constructing this robot.

Dispelling Myths (another observation)

Another interesting observation we all made at the focus group was that several children came to the party with a preconceived notion of what a robot was, and it didn’t fit with the ones that we had prepared for them. Instead, three of the children assumed that robots were human-looking and dangerous.
It is our hope that the focus group has given them a different notion of what a robot can be.

The Design Stage

In this phase of the project we brainstormed three different possible designs to tackle the problem domain we were addressing. I won’t spend too much time on each of these because we only chose one of them in the end.

Design I – IntelliBlocks

In brief, the concept here was to implement a completely hardware-based solution to alleviate the disconnect that children faced when interfacing between the computer and a physical bot. Rather than programming a robot using a computer, the program would actually become part of the environment/stage that the bot was running in (i.e. the program itself was physical). Below are some of the illustrations we put together for this design.

The first picture illustrates a Lego board that a) implies the existence of a robotic train above it and b) contains an empty sequence block that can be used to program the train. We assume there are several actions that a train can perform and that children ages 3-8 would be aware of, such as go forward, go backward, whistle, etc… The second picture illustrates the use of blocks representing those actions to build a program that commands the train to loop around the train track until it senses that it is near a station, at which point it blows its whistle and completes.

Design II – SoftBots

In this design, similar to the one above, an objective was to alleviate the disconnect that children faced when interfacing between the computer and a physical bot. However, rather than implement a completely hardware-based solution, in this design we proposed implementing a completely software-based solution.


The picture above shows a hacked-up illustration that is somewhat similar to Scratch; however, our goals were to 1) improve upon the personalization capabilities by reducing the steps needed for children to record audio and take snapshots of themselves, and 2) provide higher-level abstractions than Scratch by not treating all objects generically as sprites, but rather having the environment be aware of the capabilities of any given object on the stage and know which capabilities to make available based on the combination of objects on the stage. Consider in Scratch if you had a Martian sprite and a gun sprite: one way to program the Martian to pick up the gun would be to tell the Martian sprite to move in the direction of the gun until a color was detected and then to switch the “costumes” of the sprite to show it holding the gun. We would prefer the programming environment knowing that there was a Martian bot and a Gun bot and accordingly enabling the capability for the Martian to pick up the gun, by making a high-level action “Pickup Gun” available when the Martian is selected.

Design III – Bot Commander

See Prototype

Prototyping Time

Bot-Commander was a software/hardware-based design that ended up being what we believed to be the most effective and feasible design solution that we could prototype within the given time frame (< 2 weeks) and with the available resources. Moreover, thanks to a presentation given by Andrew Powell on MERAPI, Mindstorm, and AIR at a recent AFFUG meeting, we had heightened confidence that our goals could be achieved.

Jumping right to it, the programming environment (as an alternative to the IDE provided out of the box with Mindstorm) is shown below.



Note that the user has a set of actions on the left-hand side that he or she can drag onto a canvas. There are actions for movement, sound, and sensors. The program above will wait until a sound (such as a clap) occurs, then have the robot move forward, turn to the right, move in a circle, laugh, cry, and finally play a tune.

Architecture Talk

Before considering usability we will start off with a high level view of the architecture of the prototype, which is reflected in the diagram below.

Bot-Commander Architecture

Fortunately, from an architectural perspective there was a great deal of functionality already available in the community that we were able to leverage in order to prototype Bot-Commander. Here is a brief summary of the various components that made up the Bot-Commander architecture.

  • Bot-Commander – This is the UI implemented by the Bots-For-Tots team to effectively replace the Mindstorm visual programming environment with an alternative targeted at younger children (ages 3-8). The UI was implemented using Adobe’s Flex/ActionScript technology and is hosted within the Adobe Integrated Runtime (i.e. AIR), providing the best of both worlds: the web and the power of desktop computing. By leveraging AIR, Bot-Commander can tie in more closely to the user’s desktop to interact with Merapi and LeJOS.
  • Merapi – Not only is Merapi an actual volcano on the actual island of Java, but it is (more importantly this team would argue) a bridge between Adobe AIR applications and Java. Merapi has been designed to run on a user's machine, along with an Adobe AIR application and provide a direct bridge between the Adobe AIR framework and Java, exposing the power and overall capabilities of the user's operating system, including 3rd party hardware devices.
  • Bot-Command Generator – Implemented as a Merapi message handler, the Bot-Command Generator is responsible for interpreting a sequence of actions deployed from the Bot-Commander UI, generating a Java program from those actions, and then compiling, linking, and uploading the compiled binary to Alpha Rex (via LeJOS).
  • LeJOS – An open source Java programming environment for the Lego Mindstorm NXT, LeJOS was critical for getting a prototype up and running. LeJOS allows Java developers to program Lego robots. LeJOS consists of:
    • Replacement firmware for the NXT that includes a Java Virtual Machine.
    • A library of Java classes (classes.jar) that implement the leJOS NXJ Application Programming Interface (API).
    • A linker for linking user Java classes with classes.jar to form a binary file that can be uploaded to and run on the NXT.
    • PC tools for flashing the firmware, uploading programs, debugging, and many other functions.
    • A PC API for writing PC programs that communicate with leJOS NXJ programs using Java streams over Bluetooth or USB, or using the LEGO Communications Protocol.
  • Tiny VM – An open source, Java-based replacement firmware for the Lego Mindstorms RCX & NXT microcontrollers. TinyVM's footprint is about 10 KB. The project was forked into LeJOS back in 2000, and the Tiny VM is now a component of a larger architecture for programming Mindstorm robots.

  • Alpha Rex – Known as Roby by one of the team member’s kids, Alpha Rex is the robotic hardware that children can now program using Bot-Commander. Mindstorm robots can take on many forms, but Alpha Rex was chosen for this project due to a) his humanoid form, which often evokes curiosity in both adults and children, and b) his maximization of the use of sensors and motors.
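To make the Bot-Command Generator's role concrete, here is a minimal sketch of its first two steps (interpret the UI's action sequence and emit Java source text). The class name, action names, and the emitted `robot.*` calls are my own illustration, not the project's actual code:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of the Bot-Command Generator: turn a sequence of UI
// actions into Java source text that could then be compiled, linked, and
// uploaded via LeJOS. Action names and emitted calls are hypothetical.
public class BotCommandGenerator {
    // Maps a UI action to a line of generated Java.
    private static final Map<String, String> TEMPLATES = Map.of(
            "waitForSound", "        robot.waitForSound();",
            "forward",      "        robot.forward();",
            "turnRight",    "        robot.turnRight();",
            "playTune",     "        robot.playTune();");

    // Generate a complete (hypothetical) program from the action sequence.
    static String generate(List<String> actions) {
        StringBuilder src = new StringBuilder();
        src.append("public class BotProgram {\n");
        src.append("    public static void main(String[] args) {\n");
        for (String action : actions) {
            src.append(TEMPLATES.getOrDefault(action,
                    "        // unknown action: " + action)).append('\n');
        }
        src.append("    }\n}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate(List.of("waitForSound", "forward", "turnRight", "playTune")));
    }
}
```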

There are a few other things to note with respect to the architecture diagram above. First, outside of Alpha Rex, everything else runs as one application on the user’s desktop. Second, Merapi, the Bot-Command Generator, and LeJOS run in a cooperative process hosting an instance of the Java VM. Communication between the Bot-Commander UI (running in AIR) and the Java components happens by passing serialized objects in the Action Message Format (AMF, a format for object remoting) over sockets. Third, communication from the desktop to Alpha Rex happens over either a USB cable or Bluetooth. In both cases, LeJOS leverages open source projects to implement the communication.
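To sketch the message-passing idea on the Java side, here is a minimal round-trip of a serialized message object, using Java's built-in serialization as a stand-in for AMF (the message shape is my own illustration, not Merapi's actual classes):

```java
import java.io.*;

// Stand-in for the UI-to-Java message exchange: a message object is
// serialized to bytes (as it would be written to the socket) and
// deserialized on the other side. Real Bot-Commander traffic uses AMF
// via Merapi; this sketch uses plain Java serialization for illustration.
public class MessageRoundTrip {
    // A simple message carrying the action sequence to deploy.
    static class DeployMessage implements Serializable {
        final String[] actions;
        DeployMessage(String... actions) { this.actions = actions; }
    }

    // Serialize to bytes, as would be written to the socket.
    static byte[] toBytes(DeployMessage m) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(m);
            }
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deserialize on the receiving side.
    static DeployMessage fromBytes(byte[] data) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (DeployMessage) in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        DeployMessage received = fromBytes(toBytes(new DeployMessage("forward", "playTune")));
        System.out.println(received.actions.length);
    }
}
```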

Usability Talk

It was interesting to see during our initial focus group that one of the robots that cultivated the least amount of interest from children ages 3-8 was the Mindstorm robot. This is interesting because Mindstorm is a) supported by a well-known toy manufacturer (i.e. Lego Inc.) and b) the most popular robotics kit among teenagers and adults. However, it’s not surprising from the point of view that Mindstorm does not target children as young as 3-8.

So why, you might ask, did we decide to use Mindstorm as the basis for the prototype? The answer is simple: Mindstorm provides an extensible environment from which to build effective prototypes, extensible enough to the point of replacing its visual programming tool.

While we identified a good number of issues at our focus group with children ages 3-8 using robots and their respective programming environments, for purposes of the prototype we attempted to address only a few of the more critical issues that we saw. The goals were to provide enough to a) complete phase IV of the project and b) provide an environment that kids can begin enjoying now.

Issues the prototype was addressing:

  • Hardware Abstraction – While children in our target age group tend to understand and are able to identify objects such as trains, cars, dolls, and yes, even robots, none of the children we interacted with in our focus group were familiar with more primitive objects such as gears, motors, and sensors (not to mention that many of those primitive objects are choking hazards 😉). Having to deal with primitive hardware objects posed a significant barrier to a child’s success in accomplishing the desired end goal of programming the robot.
  • Connectivity – Both of the robot kits that had a hardware as well as a software component had connectivity issues. With Mindstorm in particular, children were confused when the robot automatically turned off, and we had to explain that the Bluetooth connection needed to be re-paired. The children moved on to the next robot while the connection was fixed, but never made it back.
  • Layout – Each of the programming environments that the children used in attempting to program the robots had varying levels of complexity, with Mindstorm being the most complex. Children did not understand how the placement of actions in the program meant different things (e.g. connecting actions made a sequence, while disconnected actions implied parallelism).
  • Software Abstraction – Mindstorm in particular had very primitive programming constructs (i.e. actions). If a robot has a claw, the child might expect “close claw” and “open claw” actions; however, in Mindstorm almost everything is controlled as motor A, B, or C.
  • Keyboard Usage – We found that children could play effectively with a visual programming environment such as Scratch when the majority of the user interactions were performed through the mouse. More complex interactions that involved typing with the keyboard acted as a barrier to accomplishing the primary task of creating a program.

The following sections will briefly cover what we did with the prototype to overcome these issues:

Hardware Abstraction

To overcome the issue of dealing with primitive hardware objects, we assume that children are starting off with a complete robot. In the context of Mindstorm this means that the robot has already been built (bypassing a major step). While it seems quite feasible to imagine that children might still piece together robots using less primitive objects, such as a claw piece (which is more of an accessory/attachment for an existing robot than a building block), nothing like that is currently available for Mindstorm, and for purposes of the prototype we assume a complete & already accessorized robot.

We started off with:

Mindstorm Out-Of-Box


And have ended up with:

Alpha Rex Finished



Connectivity

We have for the most part eliminated the complexities of connectivity at this point by not requiring a connection to be configured. Instead, once the user has decided to run their program, we dynamically look for the robot over USB and/or Bluetooth and upload the resulting program. Ideally we would also have a means of showing the user when one or more robots are detected in the area, by polling the Bluetooth connection every few seconds and assigning them default, user-friendly names; however, we have not added this feature as of yet.
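The "look for the robot at run time" behavior might be sketched as follows. The Transport interface and its names are hypothetical, since the real probing goes through LeJOS over USB or Bluetooth:

```java
import java.util.List;

// Sketch of the "no connection setup" behavior: when the user hits Run,
// probe each transport in order and use the first one that sees a robot.
// The Transport interface is my own illustration, not the LeJOS API.
public class RobotFinder {
    interface Transport {
        String name();
        boolean robotPresent();   // e.g. probe USB first, then Bluetooth
    }

    // Helper to build a fixed-answer transport (stands in for a real probe).
    static Transport fixed(String name, boolean present) {
        return new Transport() {
            public String name() { return name; }
            public boolean robotPresent() { return present; }
        };
    }

    // Return the name of the first transport with a robot, or null if none.
    static String findRobot(List<Transport> transports) {
        for (Transport t : transports) {
            if (t.robotPresent()) {
                return t.name();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(findRobot(List.of(fixed("USB", false), fixed("Bluetooth", true))));
    }
}
```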

Below is the Bot-Commander UI which shows the Run button that will be used to find the Mindstorm robot, upload the program (that would be designed on the right hand canvas), and then run it.


Layout

Here we attempted to make the layout as easy as possible. Of course, implementing any type of diagramming tool in the relatively short amount of time we had for this phase of the project was challenging. At this point, users can simply drag actions from the left-hand pane of Bot-Commander and drop them onto the design canvas on the right-hand side. The difference between our implementation and the visual diagramming environments of the other tools we looked at in the focus group is that sequence is assumed as you drop actions onto the right-hand side. Users do not have to visually attempt to snap pieces together or draw edges between actions separately. Instead, they drag and drop, sequence is assumed, and the edges are drawn automatically to reflect the assumed sequence. A major constraint at this point, however, is that we do not allow re-ordering without starting from the beginning.
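The "sequence is assumed" rule amounts to a very small model: dropping an action appends it to the program, and edges are derived from the drop order rather than drawn by the user. A sketch, with illustrative names rather than the actual Bot-Commander code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the implied-sequence layout rule: each dropped action is
// appended, and the edges between actions are computed from drop order.
public class ImpliedSequence {
    private final List<String> actions = new ArrayList<>();

    // A drag-and-drop onto the canvas simply appends to the sequence.
    void drop(String action) {
        actions.add(action);
    }

    // Edges are the consecutive pairs, rendered here as "a -> b".
    List<String> edges() {
        List<String> result = new ArrayList<>();
        for (int i = 0; i + 1 < actions.size(); i++) {
            result.add(actions.get(i) + " -> " + actions.get(i + 1));
        }
        return result;
    }

    public static void main(String[] args) {
        ImpliedSequence canvas = new ImpliedSequence();
        canvas.drop("waitForSound");
        canvas.drop("forward");
        canvas.drop("playTune");
        System.out.println(canvas.edges());
    }
}
```

This is also why re-ordering is hard in the prototype: the edges have no independent existence to edit, they are a pure function of the drop order.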

Software Abstraction

Mirroring the hardware primitives, all of the visual programming environments we looked at in the focus group also had software primitives to deal with. Even Scratch, which did not force the use of hardware but primarily dealt with virtual bots, did so at a fairly primitive level (e.g. all “bots” are actually sprite/2D image objects with a fairly limited set of capabilities that are generalized across all sprites). Consider in Scratch if you had a Martian sprite and a gun sprite: to program the Martian to pick up the gun would require telling the sprite to move in the direction of the gun until a color was detected and then switching the “costumes” of the sprite to show it holding the gun. We would prefer the programming environment knowing that there was a Martian robot with the capability to pick up a gun, and reflecting that by making a high-level action called “Pickup Gun” available.

For purposes of this prototype we are limited to abstracting away the configuration required to do move operations in Mindstorm. Rather than having a single Move action (as is the case in Mindstorm) that requires the user to know which motor they are dealing with, how many revolutions to perform, and in what direction, we are summing up this behavior in two separate actions: forward and backward. Bot-Commander assumes the number of revolutions needed to make the robot step in either direction, and even the direction itself is assumed based on the type of robot built (i.e. Alpha Rex).
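A sketch of what this abstraction amounts to: a high-level action expands into the low-level per-motor commands the user would otherwise configure by hand. The motor letters and rotation counts below are illustrative guesses for Alpha Rex, not the project's actual values:

```java
import java.util.List;

// Sketch of the move abstraction: "forward"/"backward" expand into
// Mindstorm-style per-motor commands. Values are illustrative only.
public class MoveAbstraction {
    // Expand a high-level action into low-level motor commands.
    static List<String> expand(String action) {
        switch (action) {
            case "forward":
                return List.of("motorB.rotate(+2)", "motorC.rotate(+2)");
            case "backward":
                return List.of("motorB.rotate(-2)", "motorC.rotate(-2)");
            default:
                throw new IllegalArgumentException("unknown action: " + action);
        }
    }

    public static void main(String[] args) {
        System.out.println(expand("forward"));
    }
}
```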

Keyboard Usage

Currently, everything that can be done in Bot-Commander can be done via the mouse. It is our intent to maintain that constraint on the design as much as possible.


While we have been successful in getting the base architecture implemented and have addressed some usability concerns, there are a good number of relevant features that we were not able to get to at this point. In particular, we were not able to get to the personalization features, which we found to be quite effective for children with both Scratch and Pico Crickets. In addition, we did not have any graphic artists on our project team to create effective images to represent the actions within Bot-Commander. Packaging up the configuration and making an installer for this app would take some extra effort as well, since there are platform-specific dependencies with respect to USB and Bluetooth drivers.

Kudos to MERAPI and LEJOS

At this point I feel obligated to throw some kudos at the two open source projects we used; both greatly accelerated the rate at which we could work and did as advertised. It’s not too often that you use software and it just works. Both Merapi and LeJOS did just that.


Well, I was too lazy to do much editing, instead copying and pasting over sections. Just to sum up our work above: we ended the project by performing some user & heuristic evaluations, providing us with feedback on the prototype. Unfortunately that aspect was rushed, and while useful it was not unbiased (i.e. my kids were the only ones to do user evals). That is where the work ended! However, I am hoping to translate this work over to an example within an investigation I am doing on Domain Specific Languages.


Here are some references for those interested.

UDDI Integration with LiveCycle

On several occasions I have been asked about the possibility of integrating LiveCycle ES with UDDI to provide a standards-based way of browsing LiveCycle services. Well, I figured MAX 2008 was a good motivator for getting such an integration working, and so that is what I kicked off several weeks prior to MAX. I decided I would build a LiveCycle component that allowed for both the publishing and querying of data to and from UDDI from within LiveCycle. Unfortunately, I quickly realized that there still seemed to be limited tooling around UDDI. So while I could use a complete Java implementation of the UDDI specification from Apache, JUDDI, there was no easy means for me to browse the registry to show the results. This inevitably led to the 2nd part of this proof of concept, which was to build a UDDI browser with Flex. Note: there is an open source Java UDDI browser available at that works well; however, using it for a UDDI LiveCycle demo didn’t seem right for my purposes 😉

You can view an entire walkthrough at or the following lower-res videos at YouTube.

So why did I create this demo?

  1. To provide a Proof of Concept for Integrating LiveCycle with UDDI
  2. To Discuss features and concepts around the LiveCycle Registry
  3. To learn some how-tos with Flex and Web Services (i.e. another excuse for me to improve my Flex chops)

IMPORTANT DISCLAIMER: This was a demo done as a proof of concept, for which I am making the source code available. Note, however, that the LC component and UDDI browser need a lot of fine-tuning, which I have not done and probably won’t have the time to complete anytime soon. The demo did accomplish its goal of proving that it is not only possible but very feasible to integrate LCES with UDDI.

Below is a high-level diagram for the architecture of the UDDI/LCES proof of concept.

LCES UDDI Architecture

  1. JUDDI: A Java-based implementation of the UDDI 2 specification, which was used for purposes of this demo. You can find more info about JUDDI at
  2. UDDI Component: A custom LiveCycle component deployed in a LiveCycle Instance capable of querying the LiveCycle Registry and publishing service meta-data to a UDDI Registry
  3. UDDI Browser: An AIR application for browsing Businesses, Services, and TModels in a UDDI Registry.
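To make the browser side of the architecture concrete, here is a minimal sketch (not taken from the demo source; the class and method names are made up for illustration) of the kind of UDDI v2 inquiry message an AIR/Flex browser ultimately sends over SOAP — in this case a `find_service` query against the inquiry endpoint:

```java
// Hypothetical sketch of a UDDI v2 find_service inquiry message.
// The real demo generates ActionScript stubs from the inquiry WSDL;
// this just shows the wire-level request shape those stubs produce.
public class UddiInquirySketch {

    // Build a UDDI v2 find_service request wrapped in a SOAP envelope.
    // businessKey may be empty to search across all businesses;
    // "%" is the UDDI wildcard for the name pattern.
    static String buildFindService(String businessKey, String namePattern) {
        return ""
            + "<Envelope xmlns=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            +   "<Body>"
            +     "<find_service generic=\"2.0\" xmlns=\"urn:uddi-org:api_v2\""
            +       " businessKey=\"" + businessKey + "\">"
            +       "<name>" + namePattern + "</name>"
            +     "</find_service>"
            +   "</Body>"
            + "</Envelope>";
    }

    public static void main(String[] args) {
        // Query for every service of every business.
        System.out.println(buildFindService("", "%"));
    }
}
```

The response is a `serviceList` of `serviceInfo` entries, which is what the browser renders in its tree of Businesses, Services, and TModels.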

LiveCycle ES Registry
The “collective” registry within LiveCycle ES is made up of at least six sub-registries that store meta-data about the components deployed within ES. The meta-data in the LiveCycle Registry is used at both runtime (e.g. service & version resolution) and design time (e.g. to build composite applications).

LCES Registry

  1. Component Registry: Stores the base information relevant to a component such as component id, title, description, etc…
  2. DataType Registry: DataTypes are Java classes that are exported from a component and that can be leveraged by the LCES tooling
  3. PropertyEditor Registry: Property Editors are UI elements implemented in Java that control the visual rendering of types and properties within LiveCycle ES tooling.
  4. Connector Registry: Connectors are integration mechanisms that define a means by which to invoke a LiveCycle service. Example connectors include EJB Connector, SOAP Connector, and VM Connector.
  5. Service Registry: Maintains all the meta-data we have around services such as the signature of a service, description, hints, config values, etc…
  6. Endpoint Registry: Stores configuration necessary to bind a service to one or more connectors. This provides for the loose coupling between service implementations and the means by which they are invoked (i.e. Connectors).
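The mapping the UDDI component performs over these registries can be pictured roughly as follows. This is an illustrative sketch only — all type and field names here are invented, and the real component reads this meta-data through the LiveCycle APIs — but it shows the core idea: a LiveCycle service becomes a UDDI businessService, and each endpoint/connector pair becomes a bindingTemplate carrying the access point:

```java
import java.util.List;

// Hypothetical model of publishing LiveCycle service/endpoint registry
// entries as UDDI entities. Names are made up for the sketch.
public class RegistryMappingSketch {

    // Simplified stand-ins for Service Registry and Endpoint Registry entries.
    record LcService(String name, String description, List<LcEndpoint> endpoints) {}
    record LcEndpoint(String connector, String accessPoint) {}

    // Simplified stand-ins for the corresponding UDDI structures.
    record BusinessService(String name, String description, List<BindingTemplate> bindings) {}
    record BindingTemplate(String accessPoint, String connectorHint) {}

    // One service -> one businessService; one endpoint -> one bindingTemplate.
    static BusinessService toUddi(LcService svc) {
        List<BindingTemplate> bindings = svc.endpoints().stream()
            .map(ep -> new BindingTemplate(ep.accessPoint(), ep.connector()))
            .toList();
        return new BusinessService(svc.name(), svc.description(), bindings);
    }

    public static void main(String[] args) {
        LcService svc = new LcService("EncryptionService", "Encrypts PDFs",
            List.of(new LcEndpoint("SOAP",
                "http://localhost:8080/soap/services/EncryptionService")));
        BusinessService bs = toUddi(svc);
        System.out.println(bs.name() + " -> " + bs.bindings().get(0).accessPoint());
    }
}
```

The loose coupling called out in the Endpoint Registry description is exactly what makes this mapping clean: the service meta-data supplies the businessService, while each endpoint independently supplies a binding.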

Trying out the UDDI Proof Of Concept
Unfortunately, I have not set up an environment where demos such as this are available online.
For now, however, you will need to do the following steps:

  1. Download and install the LiveCycle trial (if you haven’t already) from
    Note: This demo was built on LCES Update 1 (also known as 8.2.1)
  2. Download the source code for a) the AIR app and b) the LiveCycle Java component (i.e. uddi-dsc.jar) from Download Source Code. Note: uddi-dsc.jar contains the related Java code within.
  3. With LiveCycle up and running, go to LiveCycle Workbench and click Window–>Show View–>Components. In the Components view you can right-click the top node to install the downloaded Java component (i.e. uddi-dsc.jar). Note: You will need to configure the UDDI3Service by right-clicking it in the Components view, setting the user/password expected by JUDDI (‘admin’/” for me), and setting the publishAsHost & publishAsPort (used to fill in the WSDL URL in the UDDI Registry)
  4. Import the Flex Project included in the Download to your Flex Builder environment
  5. Run the AIR APP!
  6. Oh wait….. Before any LC services are available in the UDDI Registry you need to invoke UDDI2Service.publishLiveCycleService. You can do this from the “Components View” in Workbench; however, you first need to turn on the unsupported Workbench option -Dcom.adobe.workbench.unsupported.dscadmin.invoke.enable=true in the workbench.ini file
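For reference, the unsupported-option flag from step 6 goes on its own line among the VM arguments in workbench.ini. The surrounding lines below are just typical Eclipse-style .ini contents and will vary by install; only the -D line matters:

```ini
-vmargs
-Xmx512m
-Dcom.adobe.workbench.unsupported.dscadmin.invoke.enable=true
```

Restart Workbench after editing the file so the option takes effect.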

Anyway, good luck! There is a lot to play with there.

Few more notes for those digging deeper:

  • I packaged two modified WSDLs from which the WebService ActionScript stubs were generated within Flex Builder. I had to modify the WSDL to get around issues with the decoding of arrays.
  • If you need to regenerate the WebService ActionScript stubs then you will need to modify the src/webservices/inquiry/ file to change the isWrapped properties of the WSDLMessages to false rather than true.
  • The calls between LiveCycle and JUDDI seem slow on the perf side, but I haven’t drilled into that aspect yet.

The LCES Pet Store & Process Oriented Application Development!

This is one of three demos that I did at MAX 2008. Unfortunately, I did not make it through all the demos due to technical issues (i.e. I should have come earlier to test out the gear). Enough of the excuses though, hopefully people enjoyed what I could show and now here is the source 😉

The primary purpose of this demo was to show a) a “traditional” enterprise app built solely on top of LCES and b) the shift from typical Data Oriented Applications, which interact directly with the underlying DB, to Process Oriented Applications, which leverage long-lived processing to build a richer end-to-end experience.

Click HERE to download the source code.

Note the download is a zip file ( containing 3 files:

  1. (My Flex Project) – This App is currently hardwired to talk to localhost.
  2. petstore-dsc.jar – The LiveCycle Data Management Services assembler that creates, reads, updates, and deletes Pets from the DB, along with the Java source. This DSC also creates the underlying DB table when it is installed; however, the DDL is currently generated for MySQL only.
  3. PetStore.lca – The LiveCycle Application Archive that contains the Pet Verification Process and XFA Form used in the Application

The Architecture
Below is a slide of the overall architecture.

LCES PetStore Architecture

Note that only the highlighted boxes are complete in the demo (sorry, I didn’t get to the rest ;-( ).
A brief description of each highlighted box:

  1. The LCES PetStore AIR application
  2. The Pet Verification Process – A long lived process that generates a form/workitem that is routed to the store clerk (Tony Blue)
  3. The Pet Detail Form – the one that is rendered to Tony Blue
  4. The User Service – An out of the box service used to make User Assignments as part of a process
  5. LiveCycle Workspace – An operational UI provided out of the box for users to manage workitems and participate in long-lived processes.
  6. The PetService – A Custom service that implements the CRUD operations necessary to manage Pets in the Database and to push them to clients via LiveCycle Data Management Services.
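For a feel of what the PetService exposes to the AIR client, here is an in-memory sketch of its CRUD contract. This is illustrative only — the real PetService is a LiveCycle DSC backed by MySQL, pushing changes to clients via LiveCycle Data Management Services — and every name below is invented for the sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical in-memory stand-in for the PetService CRUD operations.
public class PetServiceSketch {

    record Pet(long id, String name, String species, double price) {}

    private final Map<Long, Pet> store = new LinkedHashMap<>();
    private final AtomicLong ids = new AtomicLong();

    // Create: assign an id and persist the new Pet.
    public Pet create(String name, String species, double price) {
        Pet pet = new Pet(ids.incrementAndGet(), name, species, price);
        store.put(pet.id(), pet);
        return pet;
    }

    // Read: fetch by id, or null if absent.
    public Pet read(long id) { return store.get(id); }

    // Update: replace the price on an existing Pet.
    public Pet update(long id, double newPrice) {
        Pet old = store.get(id);
        if (old == null) return null;
        Pet updated = new Pet(old.id(), old.name(), old.species(), newPrice);
        store.put(id, updated);
        return updated;
    }

    // Delete: remove by id; true if something was removed.
    public boolean delete(long id) { return store.remove(id) != null; }

    public static void main(String[] args) {
        PetServiceSketch svc = new PetServiceSketch();
        Pet rex = svc.create("Rex", "Dog", 199.0);
        svc.update(rex.id(), 149.0);
        System.out.println(svc.read(rex.id()));
    }
}
```

In the real demo, what makes this "process oriented" rather than purely data oriented is that a create does not just hit the DB — it also kicks off the long-lived Pet Verification Process before the pet is considered available.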

For purposes of this demo I decided to use Mate. I was originally motivated by the excellent presentation I saw from Laura Arguello at the Atlanta Flash & Flex User Group back in September. This is my first time using Mate, so hopefully I did it some justice here. At MAX 2008 I laid out the following slides to show how MVC relates to LCES and how Mate relates to LCES, respectively.



Mate & LCES

Anyway, I have two more LCES demos to post over the weekend (the Zillow App and UDDI Browser), so keep an eye out!

Just Popped in My “Tour de Flex” Flash Drive from MAX 2008

Well, I just got back from MAX2008 yesterday on the red-eye, and after a few meetings for work and re-connecting with my kids I decided to sleep for a good chunk of the day… I am now working on getting the LiveCycle demos I did at MAX2008 published on my blog as well. But before getting started on that, my attention was naturally diverted to something that required less effort: popping in the “Tour de Flex” Flash drive that I managed to snag from Greg Wilson, one of our enterprise evangelists, prior to boarding the red-eye.

I was actually fortunate enough to receive a copy of Tour de Flex prior to MAX2008 through Greg and Holly Schinsky, both of whom worked endless hours on it and were former co-workers of mine at Q-Link Technologies, where we built a leading-edge Business Process Management (BPM) platform. I say fortunate because Tour de Flex can save countless hours with its examples for someone like me, who still lacks adequate Flex chops and happens to be under pressure to deliver on some demos (as I was for MAX).

Tour De Flex

Anyway, I definitely recommend checking out Tour de Flex. If you don’t have the Flash drive because you didn’t manage to pick one up at MAX, or couldn’t make it to MAX, download it from . While there were a ton of innovative demos and presentations at MAX (and I mean a TON), Tour de Flex stood out for me due to its ability to really reach out and get Flex, and possibly other useful/related technologies (with the ability to publish samples), out to the masses, not to mention help out RIA illiterates like myself ;-).