I recently extended a project I hadn't written, adding a few more features to it. I worked collaboratively with two other people using the IDPM (Issue Driven Project Management) process. The three of us divided our task into more or less equal parts, and went about adding the desired functionality to the system. The system is a power and energy monitoring platform, and the functionality we added provides commands to set baselines and goals, and to monitor those goals in terms of power and energy consumption.
During the entire development process, I was lucky enough to never once have a source repository merge conflict with any of my partners' code. I guess we all knew how to stay out of each other's way, using packages, classes, encapsulation, and other separation mechanisms to keep the code modular and flexible.
But the thing that most simplified our task was the build system infrastructure included with the system. It took care of dependency downloading, compilation, deployment, testing, automated quality assurance, and more. I'm talking about the Ant build system, which, along with Maven, is one of the most widely used in the Java community and works very well with Java and its tools. The project came with a complete and modular Ant build specification, consisting of about 10 Ant files (XML files using the tags in the Ant namespace), so we were able to get up and running within seconds of downloading it.
We did have to meet a few times, though, to sort out external interfaces and dependency issues, like which modules depended on which, and therefore which modules needed to be completed first. At our meetings we quickly got down to specifics and used a whiteboard to write down pseudo-code, interfaces, and object definitions. One of my partners even took a cell phone snapshot of the whiteboard for future reference.
Possibly just as useful as the automated build system was the continuous integration process we used to develop the system enhancements: a combination of the Jenkins continuous integration server and Google Code's Subversion project hosting. Had we ever run into compilation or testing problems, we could easily have known who was responsible, and could easily have rolled back the system to a previous healthy state if the problem could not be fixed.
The combination of automatic building, continuous integration, and automated quality assurance kept our code honest and its format uniform. One thing I found about working with quality assurance tools like Checkstyle, PMD, and FindBugs is that they frequently teach you about writing better, more effective, and more correct code. Many of the tips and suggestions included in books like "Effective Java" by Joshua Bloch will show up as warnings in PMD and FindBugs. It's great to have them there to remind you when you're doing something unsound.
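For instance (a made-up illustration, not code from our project), comparing strings with == instead of equals() is exactly the kind of compiles-but-unsound mistake these tools are good at flagging:

```java
public class StringCompare {
  // == compares object references, not contents -- the kind of thing
  // FindBugs-style analyzers warn about.
  public static boolean brokenEquals(String a, String b) {
    return a == b; // flagged: use equals() instead
  }

  public static boolean fixedEquals(String a, String b) {
    return a != null && a.equals(b);
  }

  public static void main(String[] args) {
    String x = new String("watt");
    String y = new String("watt");
    System.out.println(brokenEquals(x, y)); // false -- different objects
    System.out.println(fixedEquals(x, y));  // true  -- same contents
  }
}
```

The broken version can appear to work for interned literals and then fail on runtime-constructed strings, which is why a tool that never gets tired of checking is so valuable.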
The whole process went so well that I don't think we had a single failed build on the Jenkins server. Of course, I know this implies that each one of us must have been running the Ant verify script locally every time before checking new code or configuration changes into the system. I know that I can rarely make any change of more than a few lines to the system without Checkstyle, PMD, FindBugs, or JUnit complaining. But this is good. I'd rather deal with one issue at a time than with dozens, like I used to before I adopted unit testing. And I'd rather have QA tools force me to keep the code looking good and doing things the right way as I code, instead of having to go back and do cleanup for hours on end. I used to hate that, and would often cut corners or delay until it was too late.
The more I think about it, these tools and this process are analogues of the things we do in our lives to make managing them more efficient and positive. Get some work done every day, don't push off unpleasant things to the point where they pile up into a huge chore, and monitor your progress so you don't experience any nasty surprises. It's common sense, and it's about time the same stress-reducing techniques we use in real life became mainstream in software development.
Wednesday, December 14, 2011
Friday, December 2, 2011
YATR
Yet another technical review....
I just recently conducted a technical review of a system which, to protect the guilty, shall remain anonymous. It's an application packaged in a zip file, and in it are all the Ant file trappings that characterize a structured Java development process: XML files for the Ivy dependency manager, for JUnit testing, for Jenkins continuous integration, and for automated quality assurance with three tools by the names of PMD, FindBugs, and Checkstyle. Here are my findings.
Well, first I visited the project's home at the Google Code site and browsed to the source tab, at which point Google helpfully reminded me of the svn command string to check out the code to my machine, which I did. After that I cd'd into the project directory and ran the ant verify target on the project, and saw that the code compiled, the unit tests were passing, and the automated quality assurance checks were passing too. That was a sign that I wasn't looking at vapor-ware, which is always a plus.
Then I fired up the system by going to a Main.java file that had a public static void main, because I just had a strange feeling that that was the app's starting point. I was right. The application started right up, and I got a prompt offering me the choice of four nifty commands for finding out how much energy is consumed in some buildings on a campus on an overinhabited tropical island. The commands seemed to work fine, but I'm a prankster, and I purposefully directed my mischief at this poor innocent application by entering:
energy-since Mokihana-A 2011-11-
See, that last string was supposed to be a complete date in the yyyy-MM-dd format. Well, the app didn't like that. It answered:
2011-11- is not a valid date, or date format yyyy-MM-dd.
Reason: java.text.ParseException: Unparseable date: "2011-11-"
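That message is the signature of java.text.SimpleDateFormat giving up. A minimal sketch of how a command could validate its date argument up front and keep the raw exception away from the user (the class and method names here are hypothetical, not the app's actual code):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateArg {
  // Parse a yyyy-MM-dd argument, returning null on bad input
  // instead of leaking the raw ParseException to the user.
  public static Date parseOrNull(String s) {
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
    fmt.setLenient(false); // reject nonsense like month 13 or day 40
    try {
      return fmt.parse(s);
    } catch (ParseException e) {
      return null; // caller prints a friendly "invalid date" message
    }
  }

  public static void main(String[] args) {
    System.out.println(parseOrNull("2011-11-07") != null); // complete date
    System.out.println(parseOrNull("2011-11-") != null);   // truncated date
  }
}
```

A null result lets the command loop print one clean error line and re-prompt, with no Java underbelly in sight.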
That's offensive. Just because I'm a prankster doesn't mean I want to see the ugly Java underbelly of the application, which I was idealizing as a nice Ruby app, and now my fantasy is shattered and I can't get Java out of my mind. Plus, I suddenly got the urge to go pick up the dragon book and write a parser, when I need to be chugging along in my handsomely compensated technical review job.
Ok, when all is said and written, it's got to be admitted that the app does what it says it will do.
Now on to another pressing matter: is it amenable to easy use and installation? Well, yeah again. Its Google Code site is elegantly furnished with all the delicate and succulent tidbits you'd ever need to know if you wanted to install the app. Its main wiki page even has a reference to the project it depends on, though the authors forgot to put a download link directly on the main page. No matter; one extra click gets even the laziest of mortals to the complete download page, which, in addition to sporting the app distribution file in zip format, also offers a complimentary style format file suitable for use with the Checkstyle Eclipse plugin, meant to have the Eclipse IDE (sold separately) do much of the styling that the Checkstyle plugin would otherwise yell at us for getting wrong.
The google site for the project even has a wiki entry for the coding standards used in the project, so that these standards might live on in the coding after-life.
Ok, score another one for the Gipper. The last thing every black belt technical reviewer checks for: can the system be extended and/or modified?
Well, looking at the development wiki, I can see that it's a breeze. The authors of the project have graciously enumerated all the development facets of the application, providing sample commands for building, running a code coverage suite, testing, generating documentation from the Javadoc comments, and deploying, plus instructions for adding commands to the actual application. The developer wiki page also includes brief sections on the issue driven project management collaboration process used to develop the application and a link to the Jenkins continuous integration server that the authors used to keep the system healthy and test it in neutral territory.
I ran the Javadoc Ant task and perused the generated documentation. I was impressed with the quality of the comments and also with the use of special Javadoc features like code links.
After running the JaCoCo test coverage tool, I was able to see that the authors achieved a very high code coverage rate, with most of the untested branches being in the test code itself.
I also surveyed the test code, and while it is extensive, I thought the design could use a little refactoring. In particular, the authors chose to put all the tests that check that the different commands run successfully into one test case class. That file is already a little bloated and would necessarily become more bloated as new commands are added to the system. If the commands merit their own classes, then they also merit their own test case classes and files.
I checked the project's issue page, and saw that the authors had divided up the tasks very neatly into feature or issue based chunks. They even created issues to explain time off from project work and multi-day absences from the project.
Finally, I started looking at the source code, which, as I could tell from successfully running the ant verify target, was at least at a certain level of quality. Right off I saw that the code included several custom exception classes, and that's usually a good sign, since those custom exceptions can carry project-specific information and make extending the application easier.
The code base is also pretty modular and I found no overly complex parts or overly nested control structures. All in all, this seems to be a well-managed and well-designed project. It does what it's supposed to do, it's easy to use and install, and it lends itself to being extended and modified.
Monday, November 28, 2011
IDPM
Somewhere between the laborious and overcomplicated world of RUP and UML, and the seat-of-the-pants just-hack-the-code approach, there has to lie a sweet spot in project management methodology. Well, Issue Driven Project Management is one of the contenders for moderation, somewhere between the stifling bureaucracy of RUP and the kiddie-script amateurism of making it up as you go. It's a technique whereby you orient your project around solving issues, which can be defects or feature additions, each of which should take no more than one or two work days to complete. Each issue is created, then either assigned to an individual or taken up voluntarily by an individual, at which point it is considered accepted. From then on, its status can be changed to fixed, invalid, duplicate, etc. In other words, each issue has its own life-cycle. The life-cycle stages of an issue can even be tied to the version control system by using appropriate tags in the commit messages. The issue list is best viewed as a matrix where the rows are the project contributors and the columns the issues, and various project management tools can be massaged to provide this perspective, for example the Google Code hosting service interface.
My first experience with IDPM took place last week on a two-week-long project with three project members. Between the continuous integration process we were using, a Jenkins build getting its code updates from the Google Code hosting site, and the IDPM, we were able to communicate fairly effectively with each other. Just by glancing at the IDPM matrix, I could see whether there were any tasks assigned to me or needing an assignee, what my teammates' progress toward completing the project was, and what their currently accepted issues were.
IDPM's break-up of a project into small tasks is a nice way to divide a project into chunks so you can always see the end of the tunnel. It also avoids the tendency to optimize prematurely, make the architecture more general than it needs to be for the current task at hand, or add luxury features even before the basic application is working.
In a sense, IDPM is like test driven development at a larger scale: first pick a task, then get it working, then pick another, and so on. It's all about incrementing functionality, not incrementing architecture or the size of the code base. This is a very pragmatic approach and leads to less stress than the attrition-based coding model, where you just code until there's nothing left to code. That model can result in lost productivity, where the major architecture undergoes several changes and redos just because the developer doesn't yet have any features working, and thus doesn't have a concrete idea of what the architecture needs to provide, just an inferred one based on a design that hasn't been translated to code.
In essence, IDPM is an organic way to grow software, where every evolutionary change in the software is driven by some need to satisfy a feature. This works well, and may not be the only good technique between the extremes of RUP and just-coding, but it shows how well a moderate approach performs even in software development.
Tuesday, November 8, 2011
Learning the ropes with WattDepot
Here at UH, there's a URL at which you can see just about how much energy each floor of several of the university dorms is consuming. As part of a federally funded study, the university has outfitted the energy meters and submeters in several of the dorms with power, voltage, and energy consumption sensors that take readings at the sub-minute level and relay them back to a central server. By visiting a URL at that central server, you can pretty much see what energy consumption looks like for college dorm students.
The university's computer science department has designed and implemented a framework for energy data sharing, uniform access, storage, dissemination, analysis, and visualization, called WattDepot. The API is hosted on the Google Projects site, and once you download it, you can have some command line interactions with the server working in just a few minutes. WattDepot provides a nice API designed to hide the lower layer protocols, making interacting with the server much like interacting with a file on your local machine.
First I implemented a class called SourceListing that just listed the sensors associated with a particular WattDepot server, which is known as the "owner" of those sensors. This was pretty simple and pretty much spelled out in the WattDepot documentation examples on the Google project page. It took me about 15 minutes to write the code and another 15 minutes to set up my Eclipse IDE to associate the framework source and javadocs with the library jar file, for easier editing, code completion, and javadoc perusal.
Then I wrote a SourceLatency class that sorted all the sources at the same URL by latency; this involved reusing the code I already had and writing an anonymous inner class implementing the Comparator interface to compare latencies. This took me about 20 minutes.
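A sketch of that pattern, using a stand-in Source class since I'm not reproducing the actual WattDepot types here:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class LatencySort {
  // Stand-in for a WattDepot source; the real class and its latency
  // accessor may differ -- this is only an illustration.
  static class Source {
    final String name;
    final long latencyMillis;
    Source(String name, long latencyMillis) {
      this.name = name;
      this.latencyMillis = latencyMillis;
    }
  }

  // Sort in place by latency using an anonymous inner Comparator.
  public static void sortByLatency(List<Source> sources) {
    Collections.sort(sources, new Comparator<Source>() {
      public int compare(Source a, Source b) {
        return Long.compare(a.latencyMillis, b.latencyMillis);
      }
    });
  }

  public static void main(String[] args) {
    List<Source> list = new ArrayList<Source>();
    list.add(new Source("Mokihana-A", 5000));
    list.add(new Source("Ilima-B", 1200));
    sortByLatency(list);
    System.out.println(list.get(0).name); // lowest latency first
  }
}
```

The nice part is that the sorting key lives in one small anonymous class, so swapping latency for some other attribute is a two-line change.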
After that, I wrote a SourceHierarchy class that uses the subsources attribute of each source, which spells out which sources are its "children", to construct a set of trees of sources, and then print those trees recursively using indentation to visualize the hierarchy, much like files are shown in a file system browser or on the command line using the Unix "tree" command. This took me about half an hour, since it involved formulating a game plan to build the trees in the most economical way. What I did was simply find the roots, by going through all the sources and eliminating every child as a root candidate. Then I printed the trees at those roots recursively, so there's never an explicit tree data structure in RAM, but I'm able to print out the tree structure nonetheless.
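The root-finding-plus-recursive-print idea can be sketched like this, with the subsources relation reduced to a plain name-to-children map (an assumption for illustration; WattDepot's real source objects carry more):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.Arrays;

public class SourceTree {
  // Print a source hierarchy from a name -> subsource-names map,
  // without ever building an explicit tree structure in memory.
  public static void printForest(Map<String, List<String>> subsources,
                                 StringBuilder out) {
    // Roots are the sources that appear as nobody's child.
    Set<String> roots = new TreeSet<String>(subsources.keySet());
    for (List<String> children : subsources.values()) {
      roots.removeAll(children);
    }
    for (String root : roots) {
      printTree(root, subsources, 0, out);
    }
  }

  private static void printTree(String name,
                                Map<String, List<String>> subsources,
                                int depth, StringBuilder out) {
    for (int i = 0; i < depth; i++) {
      out.append("  "); // two spaces per level, like a file browser
    }
    out.append(name).append('\n');
    for (String child
         : subsources.getOrDefault(name, Collections.<String>emptyList())) {
      printTree(child, subsources, depth + 1, out);
    }
  }

  public static void main(String[] args) {
    Map<String, List<String>> m = new HashMap<String, List<String>>();
    m.put("Campus", Arrays.asList("Mokihana", "Ilima"));
    m.put("Mokihana", Arrays.asList("Mokihana-A"));
    m.put("Mokihana-A", Collections.<String>emptyList());
    m.put("Ilima", Collections.<String>emptyList());
    StringBuilder sb = new StringBuilder();
    printForest(m, sb);
    System.out.print(sb);
  }
}
```

The recursion *is* the tree: each call frame plays the role of a node, which is why no explicit tree structure ever needs to exist in RAM.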
After that, I was energized, so to say, and I proceeded to write a class called EnergyYesterday. Here started the pain. I had to find out how to have Java give me yesterday's date, so as not to hard-code it, and I had to find out how to translate between various date formats: XMLGregorianCalendar, which WattDepot uses, java.util.Date, and java.util.Calendar. Well, let's just say I found some code online that braves this tedious translation, I put it in a class, duly attributing it of course, and doing date calculations should be much easier from here going forward. This class took me maybe an hour and a half to write, much of it spent wrestling with dates and date formatting, cleaning up useful date conversion code I found on the web, and making utility classes out of it. This task would have been harder had it not been for a simple WattDepot API function called getEnergyConsumed that takes two timestamps and provides the total energy consumed between them, so no adding or aggregating was necessary on my part. I just had to loop through all the sources at the server.
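The date plumbing that caused the pain boils down to two small utilities, sketched here with the standard JDK types (this is my own reconstruction, not the attributed code I found online, and WattDepot's own timestamp helpers may differ):

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import javax.xml.datatype.DatatypeConfigurationException;
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

public class Timestamps {
  // Convert a java.util.Date to the XMLGregorianCalendar type
  // that WattDepot's API traffics in.
  public static XMLGregorianCalendar toXmlCal(Date date) {
    GregorianCalendar cal = new GregorianCalendar();
    cal.setTime(date);
    try {
      return DatatypeFactory.newInstance().newXMLGregorianCalendar(cal);
    } catch (DatatypeConfigurationException e) {
      throw new IllegalStateException("No JAXP datatype factory available", e);
    }
  }

  // Midnight at the start of yesterday, computed rather than hard-coded.
  public static Date startOfYesterday() {
    Calendar cal = Calendar.getInstance();
    cal.add(Calendar.DAY_OF_MONTH, -1);
    cal.set(Calendar.HOUR_OF_DAY, 0);
    cal.set(Calendar.MINUTE, 0);
    cal.set(Calendar.SECOND, 0);
    cal.set(Calendar.MILLISECOND, 0);
    return cal.getTime();
  }

  public static void main(String[] args) {
    System.out.println(toXmlCal(startOfYesterday()));
  }
}
```

With these two, "yesterday's energy" becomes a pair of toXmlCal(startOfYesterday())-style timestamps handed to getEnergyConsumed.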
After that, I did some energy analysis in a class called HighestRecordedPowerYesterday, which uses an API call by the name of getPowerConsumed, but here I had to loop through each sensor's data points between the starting and ending timestamp and keep track of the maximum power consumed and its related timestamp. This took me about 45 minutes, and I borrowed much of the code from the WattDepot wiki documentation on the Google project site.
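That max-tracking loop is the classic single-pass scan; here it is over a stand-in Reading type (the real WattDepot sensor-data types are richer than this):

```java
import java.util.Arrays;
import java.util.List;

public class MaxPower {
  // One (timestamp, watts) reading; a stand-in for WattDepot sensor data.
  static class Reading {
    final long timestamp;
    final double watts;
    Reading(long timestamp, double watts) {
      this.timestamp = timestamp;
      this.watts = watts;
    }
  }

  // Scan a sensor's readings once, tracking the maximum power
  // consumed along with the timestamp at which it occurred.
  public static Reading maxReading(List<Reading> readings) {
    Reading max = null;
    for (Reading r : readings) {
      if (max == null || r.watts > max.watts) {
        max = r;
      }
    }
    return max;
  }

  public static void main(String[] args) {
    List<Reading> day = Arrays.asList(
        new Reading(1000L, 350.0),
        new Reading(2000L, 910.5),
        new Reading(3000L, 420.0));
    Reading peak = maxReading(day);
    System.out.println(peak.watts + " W at t=" + peak.timestamp);
  }
}
```

Run this per sensor between the start and end timestamps and you have HighestRecordedPowerYesterday in miniature.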
Finally, in a class called MondayAverageEnergy, I wrote code to average the energy consumed at each sensor for this past Monday and the one before it. Here I hard-coded the dates, and the code is a minor extension of EnergyYesterday. This took me about half an hour.
Now that I've accomplished these tasks, I feel like I have a decent grip on the WattDepot client API and the workflow for getting data from WattDepot servers, but I have a feeling there's much more to the WattDepot framework, and I'll definitely be writing more about that in the next few weeks.
Tuesday, November 1, 2011
Energy in Hawaii
Hawaii presents a set of unique challenges and, on the other hand, fertile opportunities in the areas of green energy production and energy conservation. Because Hawaii is isolated, it is not part of a supergrid like the contiguous US states are, and therefore has to import its energy from farther away, and is more susceptible to temporary supply/demand imbalances. On the other hand, because of its unique location, year-round sunny climate, and large number of shoreline miles, Hawaii is an ideal candidate for the early success of wind, sea, and sun energy solutions.
Some of these solutions aren't yet cost-effective on the mainland due to the easy exploitation of cheap fossil fuels or the technological and infrastructure gap between fossil fuel production and green energy production. However, they are actually already cost-effective in Hawaii, despite the same technological gap, because Hawaii pays a surcharge on its electrical energy costs, due to transportation and isolation, that makes electrical energy 2-3 times more expensive than on the mainland. Unlike the mainland, Hawaii derives most of its electrical energy from oil, which is the most expensive fossil fuel.
Because of this unique economic situation, green energy is being pursued in Hawaii in a more aggressive way than in most mainland states, even by Republican administrations such as that of former governor Linda Lingle, who signed the Hawaii Clean Energy initiative, thus going against the grain of a Republican party that is largely skeptical of global warming, even to the point that many Republicans view global warming as a liberal scheme to stifle capitalism.
However, green energy production and use do pose a few challenges. One of them is that green energy is typically intermittent. For example, a solar panel doesn't produce energy at night, and a windmill doesn't produce energy when there is no wind. So, to make the best use of these intermittent and time-varying energy sources, we need to integrate them into our current energy grid in a cohesive and efficient manner.
This means that two-way communication *and* control need to take place between the energy consumer's home and the electric power plant, and between the devices in the consumer's home. This would allow, for example, an electrical plant facing a demand spike to temporarily shut down the air conditioning systems of some of its clients' homes, alternately, for a few minutes at a time, to smooth out the spike. It would also allow a home's air conditioning system to use more energy when that home's solar panels are producing more: the air conditioner's temperature set-point would shift to align with the temporarily greater energy being produced by the panels.
Of course, all this communication and control requires a decent amount of hardware and even more software, as we strive to make the devices smarter and come up with new ideas for how to program and reprogram them. So intelligent and robust software has a definite role to play in a green energy infrastructure. Beyond that, it may have an even bigger role to play in green energy research and calibration, since we need a way to visualize information in a decentralized manner, so that information from many places can be aggregated and viewed on one terminal. Even when deploying proven green energy solutions, an individual household will still want to adjust the parameters of the programs driving its energy-saving devices, to optimize its energy savings and target them at its specific needs; this too requires software for input, validation, and communication.
So it's nice to know that software is going to play an integral role in the evolution of one of Hawaii's most exciting technology and research areas, and that getting the software right will save us all time, energy, and money.
Tuesday, October 25, 2011
5 questions about Java
1) What is erasure?
It refers to the fact that information of parameter type instantiation of generic Java classes does not exist at run time, but only at compile time.
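A quick demonstration of erasure (my own toy example, not from the original questions): two differently parameterized lists share one runtime class, because the type parameters only exist at compile time.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Both lists report the same runtime class: the String and Integer
    // type parameters are erased by the compiler.
    static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // prints true
    }
}
```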
2) What three properties should be satisfied by objects with respect to the .equals() method?
Reflexivity, symmetry, and transitivity.
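A minimal value class sketch (my own example) whose .equals() satisfies all three properties:

```java
public class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;              // reflexivity
        if (!(o instanceof Point)) return false; // preserves symmetry across types
        Point p = (Point) o;
        return x == p.x && y == p.y;             // field-by-field, hence transitive
    }

    @Override
    public int hashCode() { return 31 * x + y; } // must agree with equals
}
```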
3) How does test-driven development affect interface and API design?
Since the tests are written before their respective testee modules, the module interfaces and APIs are first formulated while writing the tests for those modules. Therefore, the interfaces and APIs are designed for usability and testability from the outset, and take into account the perspective from outside the class, as opposed to just the internal, implementation-oriented perspective.
4) Does inheritance hold with respect to the parameterized type in instantiations of generic classes in Java?
No, it only holds with respect to the container type. So, for example, List&lt;Object&gt; is not a supertype of List&lt;String&gt;, but Collection&lt;String&gt; is a supertype of List&lt;String&gt;, with the consequent implications for assignability.
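A small illustration (my own): the commented-out assignment does not compile, while the container-type assignment works fine.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class GenericsDemo {
    static int demo() {
        List<String> strings = new ArrayList<>();
        // List<Object> objects = strings; // does NOT compile: no inheritance
        //                                 // on the type parameter
        Collection<String> coll = strings; // fine: inheritance holds on the
                                           // container type
        coll.add("ok");
        return strings.size();             // the two references share one list
    }
}
```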
5) What are the three levels of granularity at which we can disable PMD error testing?
Annotation level (classes or methods annotated with @SuppressWarnings) and line level (lines followed by //NOPMD).
For more info see: http://pmd.sourceforge.net/suppressing.html
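A toy sketch of the two suppression mechanisms mentioned above (class and method names are my own):

```java
public class SuppressDemo {
    // Annotation level: PMD skips this whole method
    // (the annotation works on classes too).
    @SuppressWarnings("PMD")
    static int annotated() {
        int x = 1;
        return x;
    }

    static int lineLevel() {
        int unused = 0; // NOPMD - line level: only this line is exempted
        return 2;
    }
}
```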
Friday, October 21, 2011
Adventures in Google Project Hosting
I set up a Google Project Hosting page for my Robocode Diablo robot, and it was a cinch. Like any helpful IDE, the project hosting website steps you through the process, providing, among other things, repository connection parameters and sample commands.
First things first. Google Project Hosting (which I'll call GPH) uses the Subversion version and configuration control system. I happened to already be somewhat fluent in Subversion (SVN), and whatever rust I had wasn't much of a problem, since many of the commands for SVN and Git (which I've been using at the expense of SVN for the past year) are identical.
The toughest part of getting my project hosted was just making sure I didn't create that extra directory at the top level that I end up making so often by mistake, whether it's via creating a project in an IDE or importing one via version control. In this case, it just comes down to making sure to put your project files directly into the repository's trunk directory. A little birdie told me about a neat trick to get this right: just check out an empty skeleton project onto your local machine and then put the initial project contents into the folder created by SVN. After that, commit the whole thing and the files will be back up. Voilà: you have avoided the extra-directory problem.
The GPH site uses a specialized markup language for user documentation authoring that is more lightweight and easier on the eyes than HTML, and is similar to other markup languages on the market. Once you learn a few quirks, these languages have pretty low ceremony and pretty good utility for getting all the headers and paragraphs to look right.
Thursday, October 20, 2011
Diablo in details
It's been a few weeks now that I've been tinkering with Robocode. As I mentioned in a previous entry, there's quite a bit of stuff to grok before you can start sporting a graceful robot with the elegant movements of a ballerina, one that doesn't get stuck the first time it hits a wall, or get confused by the geometrical subtleties of that most difficult of entities: the dreaded corner.
In the meantime I've been brushing up on all the attendant geometry and trigonometry issues that arise when trying to navigate and turn intelligently. I've been frequently referring back to a helpful series of tutorials and sample code at http://mark.random-article.com/weber/java/robocode/lesson2.html
The one downside is that most of the examples in these tutorials assume you are extending a more complex robot class called AdvancedRobot. I resisted the urge to do that, and stuck with extending the more basic Robot, since I try to avoid premature optimization and also try to follow the KISS principle. The difference between Robot and AdvancedRobot is basically that Robot actions are blocking and AdvancedRobot actions aren't: the Robocode platform allows the non-blocking actions of an AdvancedRobot to be executed simultaneously, while the regular Robot can't walk and chew gum at the same time, though it can fake it if it alternates the two fast enough.
One of the first things I did was shamelessly "borrow" two really nifty and convenient classes from these tutorials, called EnemyRobot and AdvancedEnemyRobot. The former accomplishes the basic function of taking a snapshot of the robot that was just scanned and putting the information from that snapshot in a convenient object. The latter adds basic target prediction by linear two-point extrapolation. Not terribly impressive, but a good jumping-off point for moving on to the higher-level tactics and strategy.
Speaking of good old strategery, mine is basically a variation of strafing. Strafing means squaring off against your opponent, in other words pointing your tank perpendicular to the imaginary line joining you and your opponent. If you just keep going like this and in small enough increments, you end up "circling" your opponent (in reality you are traversing a bunch of small line segments that look like a curve but are just a series of line segments).
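The perpendicular turn can be computed from the scanned bearing alone. Here's a minimal sketch of the math, independent of the Robocode API (the class name and the normalization helper are my own, not from Diablo's source):

```java
public class StrafeMath {
    /** Normalize an angle in degrees to the range (-180, 180]. */
    static double normalize(double degrees) {
        double d = degrees % 360;
        if (d <= -180) d += 360;
        if (d > 180) d -= 360;
        return d;
    }

    /** Relative turn (degrees) that points the tank perpendicular to an
        enemy scanned at the given relative bearing, i.e. "squares off". */
    static double perpendicularTurn(double enemyBearing) {
        return normalize(enemyBearing + 90);
    }

    public static void main(String[] args) {
        System.out.println(perpendicularTurn(0));   // enemy dead ahead: turn 90 right
        System.out.println(perpendicularTurn(135)); // shorter to turn 135 left
    }
}
```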
The two obvious variations on strafing are spiraling in, i.e., incrementally getting closer to the opponent like a satellite crashing into a planet, and veering away in an inverse spiral. The variation I devised is simply to alternate between coming closer and going farther. This seemed like a logical choice, since I didn't want to design my robot for close-quarters combat, which seems like a crapshoot anyway, and neither did I want to design it to run away like a bullied robot. The alternating motion between closing in and veering out also has the nice side effect of making the robot's movements harder to predict, and therefore making the robot harder to target.
I also added some probabilistic firing to make my robot stingy when dealing with a distant target. After all, in Robocode, the energy expended in shooting a bullet comes right out of the robot's life reserve, and, if necessary, I want my robot to be able to play it cool and wait for the enemy to have no more gas in the tank, so to speak.
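The distance-based power and the probabilistic gate look roughly like this. The constants below are illustrative tuning of my own, not the exact values in Diablo (though the 0.1-3.0 bullet power clamp is Robocode's actual rule):

```java
public class FireControl {
    /** Bullet power shrinking with distance, clamped to Robocode's
        legal range of 0.1 to 3.0. The 400.0 scale is made up. */
    static double firePower(double distance) {
        return Math.max(0.1, Math.min(3.0, 400.0 / distance));
    }

    /** Fire only probabilistically against far targets. The caller passes
        a uniform random roll in [0, 1); the 200.0 scale is made up. */
    static boolean shouldFire(double distance, double roll) {
        double p = Math.min(1.0, 200.0 / distance);
        return roll < p;
    }
}
```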
Finally, I added some wall avoidance code adapted from equivalent code in AdvancedRobot modules, and though it doesn't perform as well as its prototype, I didn't expect it to, since executing one action at a time does impose some limitations.
Presto, facto, finito. My robot was easily beating sample robots with intimidating names such as Walls, Ramfire, Fire, Corners, Tracker, and SittingDuck. Actually, SittingDuck only sounds scary; in reality all it does is sit there like some slow bird. My robot comes out the victor most of the time against two other robots, called Spinbot and Crazy, but not always. But hey, my robot is rational, and I didn't design it to face crazy people or their crazy robots. Spinbot I should be able to beat, and I'm tweaking my robot intermittently, between breaks spent writing this blog entry. What Spinbot does well is move around fast in a circle, and I haven't yet added code to my robot to defeat that exact behavior.
At this point, I should mention that I did not write most of my tests until after I wrote most of my code and let the robot loose on the battlefield, and as usual, I ended up regretting that. Regardless, I did eventually add a few tests, of the unit, behavioral, and acceptance types.
I test that the line between the robot and the enemy at one time instant is perpendicular to the line between the robot's position at that instant and its position two ticks in the future. This is because it takes a robot in Robocode two ticks to react: one to get the information and one to take action.
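One way to implement that perpendicularity check is a normalized dot product. This is a sketch of the geometry only (my own helper, with the test-harness wiring omitted):

```java
public class GeometryCheck {
    /** True if vectors (ax, ay) and (bx, by) are perpendicular
        to within the given tolerance on the normalized dot product. */
    static boolean perpendicular(double ax, double ay,
                                 double bx, double by, double tol) {
        double dot = ax * bx + ay * by;
        double norm = Math.hypot(ax, ay) * Math.hypot(bx, by);
        // Degenerate (zero-length) vectors are not perpendicular to anything.
        return norm > 0 && Math.abs(dot / norm) < tol;
    }
}
```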
I also test that the robot never stops (that it never occupies the same spot for N consecutive time instants, where I test the case of N=5).
Perhaps it's trivial, but I also test that the robot fires at least 5 shots in each battle.
A useful sanity-check type of test is that the robot detects the enemy at least once for every 360-degree turn of the radar. These checks are useful, if for nothing else, for confirming one's intuition and understanding of the Robocode rules and event model.
Finally, I test that the robot does not stop permanently after hitting a wall.
The most difficult thing about testing Robocode robot behavior is the lack of a single-step simulator. It would be useful to have a Robocode module that allows the visualization of a robot at one tick and at the next one, or at the next n ticks, to see the effects of different actions.
As with any simulation framework, there's an inherent inefficiency in testing behavior since the framework is not designed to show snapshots but to show the evolution of a battle in time.
The most important lesson I came away with from my Robocode coding and testing is that, especially in a simulation environment, where environment variables can influence the behavior of the programmed module, it pays to make small changes incrementally and test them as you go. Isolating specific behaviors allows testing them independently, and to do this it pays to have a modular design that allows switching features on and off, if only for testing purposes. Good thing I was using version control, which gave me the peace of mind of knowing I could always revert to a reasonably effective robot if I got ahead of myself and ended up crippling my robot with too much complexity.
Ant ain't just XML
So, after having used Rake a few times, I jumped head first into the Java build/deploy tool called Ant. It came out of the Apache project and was originally a custom build tool for Tomcat.
Put simply, Ant is XML. A lot like XSLT (which is itself XML), an Ant file embodies tasks that are meant to be executed. These tasks are executed in Java or in the language of the underlying shell. Like XSLT, and unlike HTML, Ant seems to be Turing complete, meaning you can theoretically use it to perform whatever calculations you want.
Unlike XSLT, though, Ant is partly a declarative language: it lets the programmer specify tasks and dependencies, and figures out how to execute those tasks without violating the dependencies. In other words, with some of its features (like dependencies), it lets you specify the what instead of the how. Other features are plainly procedural and are just code in XML clothing: for example, the tasks to create and delete directories, and to compile and run code.
It takes a bit of getting used to the declarative programming paradigm, but it does save a bit of work to just declare stuff and let Ant figure out how to execute it. In particular, the dependency information is specified declaratively, and Ant figures out the dependency graph, makes sure there are no cycles in it, and finds a topological ordering in which to execute the tasks.
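A toy build file showing the declarative side (the target, property, and directory names here are made up for illustration, not taken from my project):

```xml
<project name="demo" default="dist">
  <!-- property: an immutable variable, set once -->
  <property name="build.dir" value="build"/>

  <target name="clean">
    <delete dir="${build.dir}"/>
  </target>

  <target name="compile">
    <mkdir dir="${build.dir}"/>
    <javac srcdir="src" destdir="${build.dir}" includeantruntime="false"/>
  </target>

  <!-- declarative: Ant runs compile first, without being told how -->
  <target name="dist" depends="compile">
    <zip destfile="demo.zip" basedir="${build.dir}"/>
  </target>
</project>
```

Running `ant dist` makes Ant walk the dependency graph and execute compile before dist.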
There were no major surprises for me, except for the fact that even Ant couldn't prevent me from wasting 5-10 minutes of my time figuring out a ClassNotFoundException. I had neglected to remove the .class suffix from the main class specifier string, and Ant happily fed the wrong class name argument to java.
Some useful features I ended up using over and over were the tasks to build and delete directories, the import feature that allows importing other Ant files, and the property task that allows the creation of immutable variables.
I was able to write a series of Ant files to compile, create javadoc for, run, and package into a zip file, a simple Java project consisting of one source file.
The cool thing about the zip file output is that you can unzip it (or unjar it, since jar files are zip files) and run ant from within the unzipped directory to create a new zip file. This is what allows people to easily and portably share, extend, and modify Ant-based projects.
The only feature I wasn't able to find in the Ant documentation is how to create a top-level directory in the zip file to contain all the files you actually want to zip up. I know from many unpleasant surprises that unzipping a zip file with no single top-level directory can inadvertently flood your current directory with a bunch of files you don't want there.
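For what it's worth, I believe the zip task's nested zipfileset element, with its prefix attribute, can inject that top-level directory. An untested sketch (names made up):

```xml
<target name="dist">
  <zip destfile="demo.zip">
    <!-- prefix puts every file under a single demo/ directory inside the zip -->
    <zipfileset dir="${build.dir}" prefix="demo"/>
  </zip>
</target>
```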
Tuesday, September 20, 2011
Robocodes and Katacodes
There's a concept of kata that means practice, and that some have interpreted as entailing "effortful learning". It seems to me that any kind of learning is usually effortful, as long as the subject matter is challenging and the task is qualitatively different from the concepts already learned or practiced before.
My experimentation with the concept of kata, as software engineers have used the term, consisted of trying to accomplish 13 tasks in Robocode. The tasks are outlined below, but first a brief digression on what Robocode actually is. It is a simulation framework that allows software robots to "fight" each other. The robots represent, and are rendered as, battle tanks on the screen, and competitors get to program their own robots to act independently of human input once a battle has begun. The objective is for a robot to kill all other robots in a battle by using movement, radar, and shooting.
The Robocode framework is rich and includes many useful method calls for getting the bearings and distances of enemy tanks detected by radar, and methods for moving, turning the tank gun, and turning the tank radar. The tank is characterized by a set of parameters determining its speed, gun turn speed, radar turn speed, acceleration, maximum bullet energy, and several other physical factors that can make Robocode competition a fairly complex business, with lots of tradeoffs, strategic and tactical considerations, and design choices.
Returning to the tasks, here they are:
1. Position01: The minimal robot. Does absolutely nothing at all.
2. Position02: Move forward a total of 100 pixels per turn. When you hit a wall, reverse direction.
3. Position03: Each turn, move forward a total of N pixels per turn, then turn right. N is initialized to 15, and increases by 15 per turn.
4. Position04: Move to the center of the playing field, spin around in a circle, and stop.
5. Position05: Move to the upper right corner. Then move to the lower left corner. Then move to the upper left corner. Then move to the lower right corner.
6. Position06: Move to the center, then move in a circle with a radius of approximately 100 pixels, ending up where you started.
7. Follow01: Pick one enemy and follow them.
8. Follow02: Pick one enemy and follow them, but stop if your robot gets within 50 pixels of them.
9. Follow03: Each turn, Find the closest enemy, and move in the opposite direction by 100 pixels, then stop.
10. Boom01: Sit still. Rotate gun. When it is pointing at an enemy, fire.
11. Boom02: Sit still. Pick one enemy. Only fire your gun when it is pointing at the chosen enemy.
12. Boom03: Sit still. Rotate gun. When it is pointing at an enemy, use bullet power proportional to the distance of the enemy from you. The farther away the enemy, the less power your bullet should use (since far targets increase the odds that the bullet will miss).
13. Boom04: Sit still. Pick one enemy and attempt to track it with your gun. In other words, try to have your gun always pointing at that enemy. Don't fire (you don't want to kill it).
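The distance-based power scaling in Boom03 can be sketched as a small pure function. The linear falloff and the 600-pixel reference distance below are my own assumptions for illustration; Robocode itself only fixes the legal bullet power range of 0.1 to 3.0.

```java
public class BulletPower {
    static final double MIN_POWER = 0.1;   // Robocode's minimum bullet power
    static final double MAX_POWER = 3.0;   // Robocode's maximum bullet power
    static final double MAX_RANGE = 600.0; // assumed distance at which power bottoms out

    /** Scale bullet power down linearly as the target gets farther away. */
    public static double powerForDistance(double distance) {
        double fraction = Math.min(distance, MAX_RANGE) / MAX_RANGE;
        double power = MAX_POWER - fraction * (MAX_POWER - MIN_POWER);
        // Clamp to the legal range just in case.
        return Math.max(MIN_POWER, Math.min(MAX_POWER, power));
    }
}
```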
So far, I've been able to accomplish all but tasks 6 and 9. The difficulty in task 6 is geometric: there seems to be no direct way of specifying movement along a circular trajectory of a given radius. Moving perpendicular to an intended focus will produce circular movement around that focus, but the turning radius is determined by a combination of the movement speed and the turn update rate. Very small movements and a high turn update rate can probably produce the desired effect. Another approach is to discretize the circle into a set of waypoints (representing a polygon) and then travel from one waypoint to the next repeatedly.
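The waypoint approach can be sketched with plain geometry. This is only an illustrative sketch (the waypoint count and the coordinate convention are my own choices); actually driving between the waypoints would still be done with Robocode's movement calls.

```java
import java.awt.geom.Point2D;

public class CircleWaypoints {
    /**
     * Discretize a circle of the given radius around (centerX, centerY)
     * into `count` waypoints, approximating the circle as a polygon.
     */
    public static Point2D.Double[] waypoints(double centerX, double centerY,
                                             double radius, int count) {
        Point2D.Double[] points = new Point2D.Double[count];
        for (int i = 0; i < count; i++) {
            double angle = 2 * Math.PI * i / count;
            points[i] = new Point2D.Double(centerX + radius * Math.cos(angle),
                                           centerY + radius * Math.sin(angle));
        }
        return points;
    }
}
```

Visiting the waypoints in order and returning to the first one yields an approximately circular path whose smoothness improves with the waypoint count.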
I think my difficulty with task 9 stems from an incomplete understanding of Robocode's event processing loop and how it intermixes events (like ScannedRobotEvent) with the instructions in a robot's run method. This was complicated by my alternating between extending Robot, whose movement calls are synchronous and blocking, and AdvancedRobot, which adds asynchronous, non-blocking actions.
What this means is that I'll have to go back and get the 30,000-foot view of Robocode's event model again before I delve back into the low-level specifics of the method calls useful for accomplishing basic movement, targeting, and tracking.
The most difficult thing about accomplishing the tasks, from a software development point of view, is the absence of a methodology for objective testing and validation of the robots. Observing a robot's behavior on the field only shows that it achieves the desired behavior to a greater or lesser extent. The Robocode framework also seems to lack the modular structure that would allow unit testing of specific functions to validate their correct behavior independently of other functions. This is probably endemic to simulation frameworks, since they depend on the active progress of the clock and the actors, but even a limited capability to test trajectory and movement functionality would be very useful.
Wednesday, August 31, 2011
FizzBuzz FooBar
Interesting story here about an interview problem called FizzBuzz. The author makes the surprising claim that most interview candidates can't solve this simple problem:
Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.
It took me about 9 minutes to write the FizzBuzz.java file and another 5 minutes to write the FizzBuzzTest file. The application ran immediately and all the tests also passed the first time out of the gate.
From the outset, I went with a modular design separating pure functions from those with side effects. The function getOutput is responsible for determining what string to output for a given integer, and there is even a printNumbers function, which does the looping and printing, so as to keep the main function as simple as possible.
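A sketch of that design might look like the following. The class and method names match the ones mentioned above, but the bodies are my reconstruction rather than the original source.

```java
public class FizzBuzz {
    /** Pure function: decide what string to output for a given integer. */
    public static String getOutput(int number) {
        if (number % 15 == 0) return "FizzBuzz"; // multiple of both 3 and 5
        if (number % 3 == 0) return "Fizz";
        if (number % 5 == 0) return "Buzz";
        return Integer.toString(number);
    }

    /** Side-effecting function: do the looping and printing, keeping main trivial. */
    public static void printNumbers(int from, int to) {
        for (int i = from; i <= to; i++) {
            System.out.println(getOutput(i));
        }
    }

    public static void main(String[] args) {
        printNumbers(1, 100);
    }
}
```

Because getOutput is pure, each case (plain number, Fizz, Buzz, FizzBuzz) can be unit tested without capturing console output.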
When I wrote the tests, I realized that I had to make the getOutput method of FizzBuzz public so that I could write a unit test for it. After some googling, I see that there are test frameworks like PowerMock that allow testing of private methods via special class loaders and bytecode manipulation. I will have to try that in the future, as I don't like having to change the class under test just to make its methods testable.
Monday, August 29, 2011
Analysis of Java OpenCSV csv parser
OpenCSV Java csv parser and generator
According to Prof. Philip Johnson of the University of Hawaii at Manoa, there are three Prime Directives for Open Source Software Engineering:
1. The system successfully accomplishes a useful task.
2. An external user can successfully install and use the system.
3. An external developer can successfully understand and enhance the system.
In this entry I'll analyze the Java OpenCSV csv parser at http://sourceforge.net/projects/opencsv/ according to these three metrics and see where it stands.
This package satisfies the 1st prime directive (accomplishes at least one useful function) with flying colors, and even goes beyond the advertised description by providing both csv parsing and csv output, complete with configurable delimiter and quote-character selection.
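To illustrate what configurable delimiter and quote-character parsing involves, here is a minimal, self-contained sketch of the idea. This is my own toy code, not OpenCSV's API; OpenCSV's CSVReader handles the same concerns, plus escaping and multi-line quoted fields.

```java
import java.util.ArrayList;
import java.util.List;

public class SimpleCsvLine {
    /** Split one csv line using a configurable delimiter and quote character. */
    public static List<String> parse(String line, char delimiter, char quote) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (char c : line.toCharArray()) {
            if (c == quote) {
                inQuotes = !inQuotes;            // toggle quoted state
            } else if (c == delimiter && !inQuotes) {
                fields.add(current.toString());  // end of field
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());          // last field
        return fields;
    }
}
```

For example, parsing `a;"x;y";c` with `;` as delimiter and `"` as quote yields three fields, with the quoted delimiter kept inside the middle field.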
It satisfies the 2nd prime directive (easy user installation and use), if only because the configuration is very simple. The unzipped contents of the downloaded package do not contain a README.txt file, but they do include a one-page tutorial in the doc directory. The code is also javadoc-compliant and includes javadoc documentation (automatically generated from the comments). However, to get the included example apps to work, I had to infer from a path-not-found exception that the examples must be run from the examples directory, since the path to their inputs is hard-coded in the example code. Instead of running the code via the command line from the unzipped directory, I had copied the deployment jar file (in the deploy directory) and the examples (in the examples directory) to an Eclipse project I created to test the app, which disturbed the directory layout the example code was designed to work with. If there had been a README.txt file specifying this, I probably wouldn't have made that mistake.
The 3rd prime directive (easy understanding of internals and extensibility) is also clearly satisfied, so much so that two added features not written by the original package author are included in the package: JavaBean-to-csv conversion and relational-database-table-to-csv conversion. The 3rd directive is further satisfied because the package includes Maven build files, a directory layout that reflects an automated build and deployment process, and extensive unit and integration testing code (in the test directory).