
Visual Programming Environments For Kids

Well, I happily finished my first semester at the Georgia Institute of Technology (GIT) at the beginning of May. It was a great semester: I coded more than I have in ages, had to relearn C/C++, and added LISP to my repertoire. The courses at GIT are project intensive, so I was able to do some fun stuff such as writing a multi-threaded web server that communicated with a proxy server via shared memory, implementing an inference engine in LISP, writing LLVM passes for detecting infeasible branches due to correlated predicates and parallelizable loops, and testing out some cool robots for an HCI project called Bots-For-Tots that I worked on with a group of four other Georgia Tech grad students. It's the last one that I'll focus on for purposes of this entry.

The project had us go through a user-focused process from analysis, to design and selection, to prototyping. The end result was a programming environment we called Bot-Commander, which leveraged the open source technologies MERAPI and LEJOS, and of course AIR, to enable children (ages 3-8) and myself 😉 to easily program a Mindstorm robot. Considering that I have three children, ages 1, 4, and 6, and like most geeks am immediately drawn to words containing "bot," this project was close to my heart. For those of you interested in bots and/or child education, below is selected content from the project.

My co-contributors included Albert Brzeczko, Basil Udoudoh, Dimuthu, and Bryan Hickerson.

The Premise

"How do we teach children technology?" is a basic question that, as the ubiquity of computing in the 21st century progresses, more and more parents and educators are grappling with at earlier stages of a child's development. On the one hand, the question is very important if we assume that a key measure of a community's success is the number of technologists (e.g., engineers, computer scientists) that it produces. On the other hand, the question can be considered irrelevant, since children are bound to learn "enough" about the technology embedded in their community's society and culture through its application. However, another consideration is whether "How do we teach children technology?" is the right question to be asking ourselves. The concern is that we risk teaching technology as a set of abstract concepts that are difficult for children to learn, internalize, and apply. What is lost is that technology has the ability to serve as a platform for children of all ages to apply creative thinking across multiple disciplines and interests. That ability is largely untapped today. While there has been substantial work in leveraging technology as a learning platform for older children in certain areas, the solutions have fallen short in terms of enabling younger children (ages 3 and up) and in terms of mainstream adoption throughout schools and in the home.

In the early 1980s Seymour Papert published his influential book, "Mindstorms: Children, Computers, and Powerful Ideas." In the book Papert gave rise to a new mode of thinking called constructionism, in which he argues that through technology children can integrate the mechanical with the digital to create personally meaningful projects from which they can problem solve, test, and create new ideas and conceptual models of the world. Personally meaningful projects are those that children are driven to work on out of personal interest. Papert's research led to the creation of the Logo programming language, which was designed to be powerful enough for research yet simple enough that it could be used by children. The language was popularized with the introduction of a virtual turtle that children would teach to draw via programming.

From programmable turtles to bricks, crickets, and cats, the concepts introduced by Papert have led to a host of constructionist environments that help children learn about learning by teaching robots how to interact with the world in which they live. Through working with Papert, Lego Inc. introduced a commercial robot construction kit called the RCX Programmable Brick. Other lesser-known robot kits have become available as well, and virtual robots, similar to the Logo turtle, are now free for downloading. Common constraints in all of these environments are that they typically target an age group of 8 and above and require a high degree of investment not only by the child, but also by the educator (parent or teacher) in terms of training and time. In this project we investigated these existing tools with the goal of designing a constructionist environment that not only targets younger children, but also reduces the cost to both the children and educators in terms of training and time, resulting in a product that is less prohibitive to mainstream usage.

The Focus Group

This section discusses the qualitative methods we used to explore the problem. We needed a focus group and we needed one fast.
The solution was a Bots-For-Tots pizza and ice cream party at my house, where I invited a bunch of my son's classmates over (informal, but it worked).

Finding Bots

Our impressive inventory of bots consisted of Lego Mindstorm and MIT's Scratch program, which gave us a physical and a virtual robot respectively. However, Lego Mindstorm targets children ages 10 and up, which we knew would likely be well above our children's capabilities. We needed additional bots that targeted a younger age group to give a more accurate account of the current state of constructionist toys. The answer lay in the acquisition of two additional robots: Pico Crickets from the Playful Invention Company and the Roamer from Valiant, both of which subscribe to constructionist ideas and concepts.

Composition of the Group:

  • Ages ranged from 3 to 6.
  • All of the children were boys.
  • All of the parents involved professed a strong interest in their kids learning about technology.
  • Two of the parents had jobs directly involving technology, while the others worked in the fields of medicine, psychology, and public relations.

Execution

The plan was simple.

Step I – Grease & Sugar
First, we served all of the parents and their children pizza and ice cream, providing us the opportunity to talk to the parents and children about their respective backgrounds while waiting for everyone to show up.

Step II – Constructionism 101
Next, we did a learning activity that did not involve computers or actual robots. The purpose of the activity was:

  1. Learn about the children’s ability to comprehend technical concepts
  2. Provide an overview for kids and their parents of how each of the toys worked
  3. Have fun!

We asked for volunteers from the children for both a robot and a set of computer programmers. One child volunteered as a robot, while the rest volunteered as computer programmers. We then brought out two poster boards: an empty one titled "Program" and a second one titled "Blocks" containing a set of square cut-outs velcro-ed to the back. We asked the children whether robots could "think" like we do. The children's answers were mixed, but we explained that robots cannot think by themselves and that they need computer programmers to help them think by telling them what actions to take (i.e. creating a program). The children then participated in building a program by choosing action blocks from the "Blocks" poster-board and moving them to the "Program" poster-board. After the "computer programmers" were done creating their program we had our acting robot execute it by stepping, jumping, growling, and barking as instructed by the program.
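For the geeks following along, the whole exercise reduces to something like the sketch below (a toy model of mine, not project code): the program is just an ordered list of action blocks that the "robot" walks through one at a time.

    import java.util.List;

    // Toy model of the poster-board exercise: the "program" is an ordered list
    // of action blocks that the volunteer "robot" executes in turn.
    public class PosterBoardProgram {

        enum Block { STEP, JUMP, GROWL, BARK }

        static void execute(List<Block> program) {
            for (Block block : program) {
                System.out.println("Robot performs: " + block);
            }
        }

        public static void main(String[] args) {
            // The "computer programmers" build a program by choosing blocks.
            execute(List.of(Block.STEP, Block.JUMP, Block.GROWL, Block.BARK));
        }
    }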

Programming 101

Step III – Breakout Time
After our brief course in robotics and programming we gave a very brief introduction to the different robots we had around us and then broke up, letting the children gravitate to the robots that interested them the most. We had the following five robot stations set up.

Station I – Lego Mindstorm

At the core of Lego Mindstorm is a programmable brick that has the capability of accepting input from three sensor devices and controlling up to three output devices (i.e. motors). While the brick has an interface for building programs directly on it, users will more typically use the Mindstorm programming environment to build a program and deploy it to the brick using either a USB cable or a Bluetooth connection.

Mindstorm

Target Age Group: 10+
Observations:

  • Providing a quick look at Mindstorm proved to be the most problematic for two reasons. First, if the robot was left idle it would shut down, at which point you have to re-establish the Bluetooth connection to demo it. This turned out to be an inconvenient interruption that required asking everyone to please wait while we re-established the Bluetooth connection. Second, when using the visual programming environment for Mindstorms, each action/block has a great deal of configuration, which was often difficult to see on a large screen and impossible to walk through with young children.
  • Parents and children were intrigued with the possibility of creating a humanoid robot as shown on the Mindstorm Box.
  • Parents found the lack of organization and hundreds of small pieces for Mindstorm to be daunting.
  • Surprisingly, the children showed no interest in Lego Mindstorm once we broke up across the different stations.

Station II – Pico Crickets

Similar to Mindstorm, Pico Crickets leverages a visual programming interface, motors, input sensors, and Legos. However, Pico Crickets has two distinct differences. First, it is targeted toward a younger age group of 8+. Second, Pico Crickets strives to work with the artistic capabilities and intuition of children rather than purely mechanics (i.e. gears and motors).

Pico Crickets

Target Age Group: 8+

Observations:

  • Out of all the robots, Pico Crickets held the most attention not only from children, but also from parents (one parent actually built a 7-step program). Three children spent a substantial amount of time on Pico Crickets.
  • The organization and number of pieces in the Pico construction kit were much less daunting than those of Mindstorm.
  • Children seemed to want to play with either the Lego pieces or the programming environment (to create pretty programs) but did not seem to make the connection between the two.
  • The children using the programming environment did so in the same way that they use Legos: they were snapping together virtual blocks to create diagrams, not to execute them.
  • One parent remarked that they liked the toy but felt that it required too much hand-holding for the children.

Station III – Scratch IDE

Scratch is a programming language developed by the Lifelong Kindergarten Group at the MIT Media Lab to help young people learn how to develop computer programs. The development of Scratch (and its name) was inspired by the scratching process that DJs use to create new sounds and music by rubbing old-style vinyl records back and forth on record turntables, creating new and distinctively different sounds out of something that exists. Similarly, Scratch projects allow young developers to mix together graphics and sounds, using them in new and creative ways.

Scratch

Targeted Age Group: 8+

Observations:

  • One child played with Scratch, and it was the only robot he played with the entire time. While he seemed to enjoy making Scratch (the out-of-box virtual cat robot) do stuff, he particularly enjoyed the more personal aspects of Scratch that enabled him to upload his own picture and record his own voice to use in a program. Note, however, that uploading his own picture is still a complex process for which he needed help.
  • Parents did show interest in the fact that Scratch was free.

Station IV – Valiant Roamer

The Roamer is a commercialized version of the dome-shaped physical robot that Papert initially worked on at MIT while designing the Logo language. While there is a visual programming environment for Roamer, similar to the other robots, there is no link between the visual programming environment and the physical robot. The programming environment, called Roamer World, is simply a simulation of the physical robot in a virtual world. The programming interface for the physical Roamer is the set of keys located on top of the robot.

Valiant Roamer

Targeted Age Group: 4+

Observations:

  • Both parents and children attempted to use the Roamer once but then quickly left for another toy once it did not do what they intended (which always happened on the first attempt).

Station V – Wacky Wigglers Building Set

Now here is a robot that you can find in your typical toy store. While the Wacky Wigglers set would not be considered a constructionist kit due to its lack of an actual programming interface, we still wanted to put it out there to see how children would respond to its mechanical aspect. The objective of the Wacky Wigglers Building Set is to piece together a robot with a whole lot of gears, which you can then control using a basic forward, backward, and turn-motion remote control.

Gears

Target Age Group: 5+

Observations:

  • There was substantial interest in the Wacky Wigglers Building Set. At least three children spent time successfully putting together parts of a robot. One child in particular committed himself all the way through until the robot was complete. Note: there was no adult involvement in constructing this robot.

Dispelling Myths (another observation)

Another interesting observation that we all made at the focus group was that several children came to the party with a preconceived notion of what a robot was, and it didn't fit with the ones that we had prepared for the children. Instead, three of the children assumed that robots were human-looking and dangerous.
Terminator
It is our hope that the focus group has given them a different notion of what a robot can be.

The Design Stage

In this phase of the project we brainstormed three different possible designs to tackle the problem domain we were addressing. I won't spend too much time on each of these because we only chose one of them in the end.

Design I – IntelliBlocks

In brief, the concept here was to implement a completely hardware-based solution to alleviate the disconnect that children faced when interfacing between the computer and a physical bot. Rather than programming a robot using a computer, the program would actually become part of the environment/stage that the bot was running in (i.e. the program itself was physical). Below are some pics of the illustrations we put together for this design.

IntelliBlocks1

IntelliBlocks2

The first picture illustrates a Lego board that a) implies the existence of a robotic train above it and b) contains an empty sequence block that can be used to program the train. We assume there are several actions that a train can perform and that children ages 3-8 would be aware of (go forward, go backward, whistle, etc.). The second picture illustrates the use of blocks representing those actions to build a program that commands the train to loop around the train track until it senses that it is near a station, at which point it blows its whistle and completes.
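To pin down the semantics of those physical blocks, here is a toy sketch (hypothetical names, not anything we built): loop forward until the station is sensed, then whistle.

    // Toy model of the IntelliBlocks train program's semantics. The names are
    // hypothetical; the real "program" would be physical blocks on the board.
    public class TrainProgram {

        interface StationSensor { boolean nearStation(); }

        static void run(StationSensor sensor) {
            while (!sensor.nearStation()) {
                System.out.println("go forward");   // physical FORWARD block
            }
            System.out.println("whistle");          // physical WHISTLE block
        }

        public static void main(String[] args) {
            // Simulate a station being reached after three checks.
            int[] ticks = {0};
            run(() -> ++ticks[0] >= 3);
        }
    }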

Design II – SoftBots

In this design, similar to the one above, an objective was to alleviate the disconnect that children faced when interfacing between the computer and a physical bot. However, rather than implement a completely hardware-based solution, in this design we proposed implementing a completely software-based solution.

SoftBots

The picture above shows a hacked-up illustration that is somewhat similar to Scratch; however, our goals were to 1) improve upon the personalization capabilities by reducing the steps needed for children to record audio and take snapshots of themselves and 2) provide higher-level abstractions than Scratch by not treating all objects generically as sprites, but rather having the environment be aware of the capabilities of any given object on the stage and know what capabilities to make available based on the combination of objects on the stage. Consider in Scratch if you had a Martian sprite and a gun sprite: one way to program the Martian to pick up the gun would be to tell the Martian sprite to move in the direction of the gun until a color was detected and then to switch the "costumes" of the sprite to show it holding the gun. We would rather the programming environment know that there is a Martian bot and a Gun bot and accordingly enable the capability for the Martian to pick up the gun by making a high-level "Pickup Gun" action available when the Martian is selected, as in the sketch below.
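Here is a minimal sketch of that idea (all names hypothetical; nothing here is from the actual prototype), where the environment derives its available actions from the capabilities of the objects on the stage:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: each object on the stage declares its capabilities,
    // and the environment derives the high-level actions it offers from them.
    interface StageObject { String name(); }

    interface CanPickUp extends StageObject {
        void pickUp(StageObject item);
    }

    class Martian implements CanPickUp {
        public String name() { return "Martian"; }
        public void pickUp(StageObject item) {
            System.out.println(name() + " picks up the " + item.name());
        }
    }

    class Gun implements StageObject {
        public String name() { return "Gun"; }
    }

    public class Stage {
        // Offer "Pickup <item>" only when a picker and an item are both on stage.
        static List<String> availableActions(List<StageObject> onStage) {
            List<String> actions = new ArrayList<>();
            for (StageObject s : onStage) {
                if (s instanceof CanPickUp) {
                    for (StageObject other : onStage) {
                        if (other != s) actions.add("Pickup " + other.name());
                    }
                }
            }
            return actions;
        }

        public static void main(String[] args) {
            System.out.println(availableActions(List.of(new Martian(), new Gun())));
            // -> [Pickup Gun]
        }
    }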

Design III – Bot Commander

See Prototype

Prototyping Time

Bot-Commander was a software/hardware-based design that ended up being what we believed to be the most effective and feasible design solution that we could prototype within the given time frame (< 2 weeks) and with the available resources. Moreover, thanks to a presentation given by Andrew Powell on MERAPI, Mindstorm, and AIR at a recent AFFUG meeting, we had heightened confidence that our goals could be achieved.

Jumping right to it, the programming environment (as an alternative to the IDE provided out of the box with Mindstorm) is shown below.

Bot-Commander

Note that the user has a set of actions on the left-hand side that he or she can drag onto a canvas. There are actions for movement, sound, and sensors. The program above will wait until a sound (such as a clap) occurs, then move the robot forward, to the right, and in a circle, make him laugh and cry, and finally play a tune.
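For the curious, once deployed to the brick a sequence like that boils down to something roughly like the following leJOS-style program. This is a hand-written approximation rather than the prototype's actual generated code, and the ports, threshold, and rotation values are all assumptions:

    import lejos.nxt.Motor;
    import lejos.nxt.SensorPort;
    import lejos.nxt.Sound;
    import lejos.nxt.SoundSensor;

    // Rough approximation of the program shown above, assuming the classic
    // lejos.nxt API. "laugh"/"cry"/"tune" are stood in for by a simple tone.
    public class WaitAndMove {
        public static void main(String[] args) throws InterruptedException {
            SoundSensor ear = new SoundSensor(SensorPort.S2);
            while (ear.readValue() < 60) {   // wait for a clap (assumed threshold)
                Thread.sleep(50);
            }
            Motor.B.rotate(360, true);       // forward (assumed gait values)
            Motor.C.rotate(360);
            Motor.B.rotate(180);             // turn right
            Sound.playTone(440, 500);        // stand-in for laugh/cry/tune
        }
    }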

Architecture Talk

Before considering usability we will start off with a high level view of the architecture of the prototype, which is reflected in the diagram below.

Bot-Commander Architecture

Fortunately, from an architectural perspective there was a great deal of functionality already available in the community that we were able to leverage in order to prototype Bot-Commander. Here is a brief summary of the various components that made up the Bot-Commander architecture.

  • Bot-Commander – This is the UI that was implemented by the Bots-For-Tots team to effectively replace the Mindstorm visual programming environment with an alternative that targets younger children (ages 3-8). The UI was implemented using Adobe's Flex/ActionScript technology and is hosted within Adobe's Integrated Runtime (i.e. AIR), providing the best of two worlds: the web and the power of desktop computing. By leveraging AIR, Bot-Commander can tie in more closely to the user's desktop to interact with Merapi and LeJOS.
  • Merapi – Not only is Merapi an actual volcano on the actual island of Java, but it is (more importantly, this team would argue) a bridge between Adobe AIR applications and Java. Merapi has been designed to run on a user's machine alongside an Adobe AIR application and provide a direct bridge between the Adobe AIR framework and Java, exposing the power and overall capabilities of the user's operating system, including 3rd-party hardware devices.
  • Bot-Command Generator – Implemented as a Merapi message handler, the Bot-Command Generator is responsible for interpreting a sequence of actions deployed from the Bot-Commander UI, generating a Java program from those actions, and then compiling, linking, and uploading the compiled binary to Alpha Rex (via LeJOS). A sketch of this flow appears after this list.
  • LeJOS – As an open source Java programming environment for the Lego Mindstorm NXT, LeJOS was critical for us to get a prototype up and running. LeJOS allows Java developers to program Lego robots. LeJOS consists of:
    • Replacement firmware for the NXT that includes a Java Virtual Machine.
    • A library of Java classes (classes.jar) that implement the leJOS NXJ Application Programming Interface (API).
    • A linker for linking user Java classes with classes.jar to form a binary file that can be uploaded and run on the NXT.
    • PC tools for flashing the firmware, uploading programs, debugging, and many other functions.
    • A PC API for writing PC programs that communicate with leJOS NXJ programs using Java streams over Bluetooth or USB, or using the LEGO Communications Protocol.
  • Tiny VM – An open source, Java-based replacement firmware for the Lego Mindstorms RCX & NXT microcontrollers. TinyVM's footprint is about 10 KB. The project was forked into LeJOS back in 2000, and the Tiny VM is now a component of a larger architecture for programming Mindstorm robots.

  • Alpha Rex – Known as Roby by one of the team members' kids, Alpha Rex is the robotic hardware that children can now program using Bot-Commander. Mindstorm robots can take on many forms, but Alpha Rex was chosen for this project due to a) his humanoid form, which often evokes curiosity in both adults and children, and b) his maximization of the use of sensors and motors.

There are a few other things to note with respect to the architecture diagram above. First, outside of Alpha Rex, everything else is running as one application on the user's desktop. Second, Merapi, the Bot-Command Generator, and LeJOS are running in a cooperative process that is hosting an instance of the Java VM. Communication between the Bot-Commander UI (running in AIR) and the Java components happens through passing serialized objects in the Action Message Format (AMF, a format for object remoting) over sockets. Third, communication from the desktop to Alpha Rex happens over either a USB cable or Bluetooth. In both cases, LeJOS leverages open source projects to implement the communication.
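As promised above, here is a heavily simplified sketch of the Bot-Command Generator's core job: turning a sequence of UI actions into LeJOS Java source. The class and action names are illustrative rather than lifted from the prototype, and the compile/link/upload steps (handled by the LeJOS tools) are omitted:

    import java.util.List;

    // Illustrative sketch of the Bot-Command Generator's core job: turn a
    // sequence of UI actions into LeJOS Java source. Compiling, linking, and
    // uploading (via the LeJOS PC tools) are omitted here.
    public class BotCommandGenerator {

        static String generate(List<String> actions) {
            StringBuilder src = new StringBuilder();
            src.append("import lejos.nxt.Motor;\n\n");
            src.append("public class BotProgram {\n");
            src.append("    public static void main(String[] args) {\n");
            for (String action : actions) {
                switch (action) {
                    case "forward":
                        // Assumed step size; the real prototype tuned this for Alpha Rex.
                        src.append("        Motor.A.rotate(360);\n");
                        break;
                    case "backward":
                        src.append("        Motor.A.rotate(-360);\n");
                        break;
                }
            }
            src.append("    }\n}\n");
            return src.toString();
        }

        public static void main(String[] args) {
            System.out.println(generate(List.of("forward", "forward", "backward")));
        }
    }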

Usability Talk

It was interesting to see during our initial focus group that one of the robots that cultivated the least amount of interest from children 3-8 was the Mindstorm robot. This is interesting because Mindstorm is a) supported by a well-known toy manufacturer (i.e. Lego Inc.) and b) the most popular of any robotics kit among teenagers and adults. However, it's not surprising from the point of view that Mindstorm does not target children as young as 3-8.

So why, you might ask, did we decide to use Mindstorm as a basis for the prototype? The answer is simple: Mindstorm provides an extensible environment from which to build effective prototypes, extensible enough to the point of replacing its visual programming tool.

While we identified a good number of issues at our focus group with children ages 3-8 using robots and their respective programming environments, for purposes of the prototype we attempted to address only a few of the more critical issues that we saw. The goals were to provide enough to a) complete phase IV of the project and b) provide an environment that kids can begin enjoying now.

Issues the prototype was addressing:

  • Hardware Abstraction – While children in our target age group tend to understand and are able to identify objects such as trains, cars, dolls, and yes, even a robot, none of the children that we interacted with in our focus group were familiar with more primitive objects such as gears, motors, and sensors (not to mention that many of those primitive objects are choking hazards 😉). Having to deal with primitive hardware objects posed a significant barrier to a child's success in accomplishing their desired end goal of programming the robot.
  • Connectivity – Both of the robot kits that had both a hardware and a software component had connectivity issues. With Mindstorm in particular, children were confused when the robot automatically turned off, and we had to explain to them that the Bluetooth connection needed to be re-paired. The children moved on to the next robot while the connection was fixed, but never made it back.
  • Layout – Each of the programming environments that the children used in attempting to program the robots had varying levels of complexity, with Mindstorm being the most complex. Children did not understand how the placement of the actions in the program meant different things (e.g. connecting actions made a sequence, while disconnected actions implied parallelism).
  • Software Abstraction – Mindstorm in particular had very primitive programming constructs (i.e. actions). If a robot has a claw the child might expect to have "close claw" and "open claw" actions; however, in Mindstorm almost everything is controlled as motor A, B, or C.
  • Keyboard Usage – We found that children could play effectively with a visual programming environment such as Scratch when the majority of the user interactions they performed were done through the mouse. More complex interactions that involved typing with the keyboard acted as a barrier to accomplishing the primary task of creating a program.

The following sections will briefly cover what we did with the prototype to overcome these issues:

Hardware Abstraction

To overcome the issue of dealing with primitive hardware objects, we assume that children are starting off with a complete robot. In the context of Mindstorm this means that the robot has already been built (bypassing a major step). While it would seem quite feasible to imagine that children might still piece together robots using less primitive objects such as a claw piece (which is more of an accessory/attachment for an existing robot rather than a building block), nothing like that is currently available for Mindstorm, and for purposes of the prototype we assume a complete and already accessorized robot.

We started off with:

Mindstorm Out-Of-Box

And have ended up with:

Alpha Rex Finished

Connectivity

We have for the most part eliminated the complexities of connectivity at this point by not requiring a connection to be configured. Instead, once the user has decided to run their program, we dynamically look for the robot over either USB or Bluetooth and upload the resulting program. Preferably we would also have a means of showing the user when one or more robots are detected in the area by polling the Bluetooth connection every few seconds and assigning them default, user-friendly names; however, we have not added this feature as of yet. A sketch of the lookup follows.
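The lookup itself amounts to something like the sketch below, written against the leJOS NXJ PC API. The exact signatures here are from memory and should be treated as assumptions; the idea is simply to try every transport and take the first robot found:

    import lejos.pc.comm.NXTCommFactory;
    import lejos.pc.comm.NXTConnector;

    // Sketch of the "no configuration" connect step, assuming the leJOS NXJ
    // PC API. Signatures are assumptions from memory: try every transport,
    // take the first NXT that answers.
    public class RobotFinder {

        static NXTConnector findRobot() {
            NXTConnector conn = new NXTConnector();
            // null name/address = first NXT found; search both USB and Bluetooth.
            boolean connected = conn.connectTo(null, null, NXTCommFactory.ALL_PROTOCOLS);
            return connected ? conn : null;
        }

        public static void main(String[] args) {
            NXTConnector conn = findRobot();
            System.out.println(conn != null ? "Robot found" : "No robot detected");
        }
    }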

Below is the Bot-Commander UI, which shows the Run button that is used to find the Mindstorm robot, upload the program (designed on the right-hand canvas), and then run it.

Layout

Here we attempted to make the layout as easy as possible. Of course, implementing any type of diagramming tool in the relatively short amount of time we had for this phase of the project was challenging. At this point, users can simply drag actions from the left-hand pane of the Bot-Commander and drop them onto the design canvas on the right-hand side. The difference between our implementation and the visual diagramming environments of the other tools we looked at in the focus group is that sequence is assumed as you drop actions onto the right-hand side. Users do not have to visually snap pieces together or draw edges between actions separately. Instead, they drag and drop, sequence is assumed, and the edges are drawn automatically to reflect the assumed sequence (sketched below). A major constraint at this point, however, is that we do not allow re-ordering without starting from the beginning.
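The drop behavior amounts to something like this sketch (hypothetical names, and in Java for consistency with the other sketches rather than the ActionScript the UI actually uses): every dropped action is appended, and an edge from the previous action is implied automatically.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the "sequence is assumed" drop behavior: every
    // dropped action is appended, and an edge from the previous action is
    // drawn automatically. No re-ordering, mirroring the prototype's constraint.
    public class ProgramCanvas {

        private final List<String> actions = new ArrayList<>();

        void drop(String action) {
            if (!actions.isEmpty()) {
                // The edge is drawn for the user; they never connect blocks by hand.
                System.out.println("draw edge: " + actions.get(actions.size() - 1)
                        + " -> " + action);
            }
            actions.add(action);
        }

        public static void main(String[] args) {
            ProgramCanvas canvas = new ProgramCanvas();
            canvas.drop("wait for sound");
            canvas.drop("forward");
            canvas.drop("laugh");
        }
    }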

Software Abstraction

Mirroring the hardware primitives, all of the visual programming environments we looked at in the focus group also had software primitives to deal with. Even Scratch, which did not force the use of hardware but primarily dealt with virtual bots, did so at a fairly primitive level (e.g. all "bots" are actually sprite/2D image objects with a fairly limited set of capabilities that are generalized across all sprites). Consider in Scratch if you had a Martian sprite and a gun sprite: to program the Martian to pick up the gun would require telling the sprite to move in the direction of the gun until a color was detected and then switching the "costumes" of the sprite to show it holding the gun. We would rather the programming environment know that there was a Martian robot with the capability to pick up a gun, and reflect that by making a high-level action called "Pickup Gun" available (as sketched in the SoftBots design above).

For the purposes of this prototype we are limited to abstracting away the configuration required to do move operations in Mindstorm. Rather than having a single Move action (as is the case in Mindstorm) that requires the user to know what motor they are dealing with, how many revolutions to perform, and in what direction, we are summing up this behavior in two separate actions: forward and backward. Bot-Commander assumes the number of revolutions needed to make the robot step in either direction, and even the direction itself is assumed based on the type of robot built (i.e. Alpha Rex). A sketch of the idea follows.
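Concretely, the abstraction is a thin wrapper along these lines, assuming the classic lejos.nxt API. The motor ports and rotation counts for Alpha Rex are assumptions here; the prototype baked in values tuned for his gait:

    import lejos.nxt.Motor;

    // Sketch of the forward/backward abstraction over raw motor control.
    // Ports B/C and the step rotation are assumed values, not prototype code.
    public class AlphaRexMoves {

        private static final int STEP_DEGREES = 360; // assumed per-step rotation

        public static void forward() {
            // Drive both leg motors the same amount; details hidden from the child.
            Motor.B.rotate(STEP_DEGREES, true);  // true = return immediately
            Motor.C.rotate(STEP_DEGREES);        // blocks until done
        }

        public static void backward() {
            Motor.B.rotate(-STEP_DEGREES, true);
            Motor.C.rotate(-STEP_DEGREES);
        }
    }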

Keyboard Usage

Currently, everything that can be done in Bot-Commander can be done via the mouse. It is our intent to maintain that constraint on the design as much as possible.

Limitations

While we were successful in getting the base architecture implemented and addressing some usability concerns, there are a good number of relevant features that we were not able to get to at this point. In particular, we were not able to get to the personalization features, which we found to be quite effective for children with both Scratch and Pico Crickets. In addition, we did not have any graphic artists on our project team to create effective images to represent the actions within Bot-Commander. Packaging up the configuration and making an installer for this app would take some extra effort as well, since there are platform-specific dependencies with respect to the USB and Bluetooth drivers.

Kudos to MERAPI and LEJOS

At this point I feel obligated to throw some kudos at the two open source projects we used; both greatly accelerated the rate at which we could work and did as advertised. It's not too often that you use software and it just works. Both Merapi and LeJOS did just that.

Finally!!

Well, I was too lazy to do much editing, instead copying and pasting over sections. Just to sum up our work above: we ended the project by performing some user & heuristic evaluations, providing us with feedback on the prototype. Unfortunately that aspect was rushed, and while useful it was not unbiased (i.e. my kids were the only ones to do user evals). That is where the work ended! However, I am hoping to translate this work over to an example within an investigation I am doing on Domain Specific Languages.

References

Here are some references for those interested.
