I recently extended a project I hadn't originally written, adding a few more features to it. I worked collaboratively with two other people using the IDPM (Issue Driven Project Management) process. The three of us divided our task into more or less equal parts and went about adding the desired functionality to the system. The system is a power and energy monitoring platform, and the functionality we added provides commands for setting baselines and goals, and for monitoring those goals in terms of power and energy consumption.
During the entire development process, I was lucky enough to never once have a source repository merge conflict with any of my partners' code. I guess we all knew how to stay out of each other's way, using package names, classes, encapsulation, and other separation mechanisms to keep the code modular and flexible.
But the thing that most simplified our task was the build system infrastructure included with the project. It took care of dependency downloading, compilation, deployment, testing, automated quality assurance, and more. I'm talking about the Ant build system, which, along with Maven, is the most widely used in the Java community and works very well with Java and its tools. The project came with a complete and modular Ant build specification, consisting of about 10 Ant files (XML files using the tags in the Ant namespace), so we were able to get off the ground and running within seconds of downloading it.
We did have to meet a few times, though, to sort out external interfaces and dependency issues, like which modules depended on which, and therefore which modules needed to be completed first. At our meetings we quickly got down to specifics and used a dry-erase board to write down pseudo-code, interfaces, and object definitions. One of my partners even took a cell phone snapshot of the board for future reference.
Possibly even as useful as the automated build system was the continuous integration process we used to develop the system enhancements, a combination of the Jenkins build system and the google Subversion project hosting. Had we ever run into compilation or testing problems, we could have easily known who was responsible and could have easily rolled back the system to a previous healthy state if the problem could not be fixed.
The combination of automatic building, continuous integration, and automated quality assurance kept our code honest and its format uniform. One thing I found about working with quality assurance tools like Checkstyle, PMD, and FindBugs is that they frequently teach you about writing better, more effective, and more correct code. Many of the tips and suggestions included in books like "Effective Java" by Joshua Bloch show up as warnings in PMD and FindBugs. It's great to have them there to remind you when you're doing something unsound.
The whole process went so well that I don't think we had a single failed build on the Jenkins server. Of course, this implies that each of us must have been running the Ant verify target locally every time before checking new code or configuration changes into the repository. I know that I can rarely make a change of more than a few lines to the system without Checkstyle, PMD, FindBugs, or JUnit complaining. But this is good. I'd rather deal with one issue at a time than with dozens, like I used to before I unit tested my code. And I'd rather have the QA tools force me to keep the code looking good and doing things the right way as I code, instead of having to go back and do cleanup for hours on end. I used to hate that, and would often cut corners or delay until it was too late.
The more I think about it, these tools and this process are analogues of the things we do in our lives to make managing them more efficient and positive: get some work done every day, don't put off unpleasant things until they pile up into a huge chore, and monitor your progress so you don't get any nasty surprises. It's common sense, and it's about time the same stress-reducing techniques we use in real life became mainstream in software development.
Friday, December 2, 2011
YATR
Yet another technical review....
I just recently conducted a technical review of a system which, to protect the guilty, shall remain anonymous. It's an application packaged in a zip file, and in it are all the Ant file trappings that characterize a structured Java development process: XML files for the Ivy dependency manager, for JUnit testing, for Jenkins continuous integration, and for automated quality assurance with three tools named PMD, FindBugs, and Checkstyle. Here are my findings.
Well, first I visited the project's home at the Google Code site and browsed to the source tab, at which point Google helpfully reminded me of the svn command string to check out the code to my machine, which I did. After that I cd'd into the project directory and ran the Ant verify target on the project, and saw that the code compiled, the unit tests were passing, and the automated quality assurance checks were passing too. That was a sign that I wasn't looking at vapor-ware, which is always a plus.
Then I fired up the system by going to a Main.java file that had a public static void main, because I just had a strange feeling that that was the app's starting point. I was right. The application started right up, and I got a prompt offering me a choice of four nifty commands for finding out how much energy is consumed by some buildings on a campus on an over-inhabited tropical island. The commands seem to work fine, but I'm a prankster, and I purposefully directed my mischief at this poor innocent application by entering:
energy-since Mokihana-A 2011-11-
See, that last string was supposed to be a complete date in the yyyy-MM-dd format. Well, the app didn't like that. It answered:
2011-11- is not a valid date, or date format yyyy-MM-dd.
Reason: java.text.ParseException: Unparseable date: "2011-11-"
That's offensive. Just because I'm a prankster doesn't mean I want to see the ugly Java underbelly of the application, which I had been idealizing as a nice Ruby app; now my fantasy is shattered and I can't get Java out of my mind. Plus, I suddenly got the urge to go pick up the dragon book and write a parser, when I need to be chugging along in my handsomely compensated technical review job.
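For what it's worth, hiding that underbelly wouldn't take much. I don't know how the app actually parses its arguments, but a little strict-parsing wrapper along these lines (the class and method names are my own invention) would keep the ParseException to itself:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// A hypothetical wrapper, not the app's actual code: parse a yyyy-MM-dd
// argument strictly and complain politely instead of leaking a stack trace.
public class DateArgument {

  public static Date parseOrComplain(String arg) {
    SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
    format.setLenient(false); // also rejects nonsense like month 13 or day 45
    try {
      return format.parse(arg);
    }
    catch (ParseException e) {
      System.err.println("'" + arg + "' is not a valid date; expected yyyy-MM-dd, e.g. 2011-11-30.");
      return null;
    }
  }
}
```

One catch block, and my Ruby fantasy would have survived intact.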
OK, when all is said and written, it has to be admitted that the app does what it says it will do.
Now on to another pressing matter: is it amenable to easy use and installation? Well, yeah, again. Its Google Code site is elegantly furnished with all the delicate and succulent tidbits you'd ever need to know to install the app. Its main wiki page even has a reference to the project it depends on, though the authors forgot to put a download link directly on the main page. No matter; one extra click gets even the laziest of mortals to the complete download page, which, in addition to sporting the app distribution file in zip format, also offers a complimentary style format file suitable for use with Eclipse, meant to have the Eclipse IDE (sold separately) do much of the styling for us that the Checkstyle plugin will otherwise yell at us about if we don't do it the right way.
The Google Code site for the project even has a wiki entry for the coding standards used in the project, so that these standards might live on in the coding afterlife.
OK, score another one for the Gipper. The last thing every black-belt technical reviewer checks for: can the system be extended and/or modified?
Well, looking at the development wiki, I can see that it's a cinch. The authors of the project have graciously enumerated all the development facets of the application, providing sample commands for building, testing, running a code coverage suite, generating documentation from the Javadoc comments, and deploying, along with instructions for adding commands to the actual application. The developer wiki page also includes brief sections on the issue driven project management collaboration process used to develop the application and a link to the Jenkins continuous integration server that the authors used to keep the system healthy and test it in neutral territory.
I ran the Javadoc Ant task and perused the generated documentation. I was impressed with the quality of the comments and also with the use of special Javadoc features like code links.
After running the JaCoCo test coverage tool, I was able to see that the authors achieved a very high code coverage rate, with most of the untested branches being in the test code itself.
I also surveyed the test code, and while it is extensive, I thought the design could use a little refactoring. In particular, the authors chose to put all the tests that check that the different commands run successfully into one test case class. That file is already a little bloated and would necessarily become more bloated as new commands are added to the system. If the commands merit their own classes, then they also merit their own test case classes and files.
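To make the suggestion concrete, here's the shape I have in mind: one small test class per command. The class and method names below are invented, since the project is staying anonymous:

```java
import static org.junit.Assert.assertFalse;

import org.junit.Test;

// Hypothetical example: a test class devoted to a single command, so each new
// command gets its own test file instead of fattening one shared test case.
// EnergySinceCommand and isValidDate are made-up names for this sketch.
public class TestEnergySinceCommand {

  @Test
  public void rejectsPartialDates() {
    EnergySinceCommand command = new EnergySinceCommand();
    assertFalse("A partial date such as 2011-11- should be rejected",
        command.isValidDate("2011-11-"));
  }
}
```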
I checked the project's issue page and saw that the authors had divided up the tasks very neatly into feature- or issue-based chunks. They even created issues to explain time off from project work and multi-day absences from the project.
Finally, I started looking at the source code, which, as I could tell from successfully running the Ant verify target, was at least at a certain level of quality. Right off I saw that the code includes several custom exception classes, and that's usually a good sign, since custom exceptions can carry project-specific information and make extending the application easier.
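For illustration only (the project's actual exception classes go by different names), this is the kind of custom exception I mean, carrying project-specific context along with the message:

```java
// Illustration only: a custom exception that carries project-specific
// context (here, a source name) in addition to the error message.
public class InvalidSourceException extends Exception {

  private static final long serialVersionUID = 1L;

  private final String sourceName;

  public InvalidSourceException(String sourceName, String message) {
    super(message);
    this.sourceName = sourceName;
  }

  /** Returns the name of the offending source, for friendlier error reporting. */
  public String getSourceName() {
    return this.sourceName;
  }
}
```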
The code base is also pretty modular and I found no overly complex parts or overly nested control structures. All in all, this seems to be a well-managed and well-designed project. It does what it's supposed to do, it's easy to use and install, and it lends itself to being extended and modified.
Monday, November 28, 2011
IDPM
Somewhere between the laborious and overcomplicated world of RUP and UML and the seat-of-the-pants just-hack-the-code approach, there has to lie a sweet spot in project management methodology. Well, Issue Driven Project Management is one of the contenders for that middle ground, somewhere between the stifling bureaucracy of RUP and the kiddie-script amateurism of making it up as you go. It's a technique whereby you orient your project around solving issues, which can be defects or feature additions, each of which should take no more than one or two work days to complete. Each issue is created, then either assigned to an individual or taken up voluntarily by one, and is then considered accepted. From then on, its status can be changed to fixed, invalid, duplicate, and so on. In other words, each issue has its own life-cycle. The life-cycle stages of an issue can even be tied to the version control system by using appropriate tags in the commit messages. The issue list is best viewed as a matrix where the rows are the project contributors and the columns are the issues, and various project management tools can be massaged to provide this perspective, for example the Google Code hosting service interface.
My first experience with IDPM took place last week on a two-week-long project with three project members. Between the continuous integration process we were using, a Jenkins build getting its code updates from the Google Code hosting site, and IDPM itself, we were able to communicate fairly effectively with each other. Just by glancing at the IDPM matrix, I could see whether there were any tasks assigned to me or needing an assignee, how far along my teammates were toward completing the project, and what their currently accepted issues were.
IDPM's break-up of a project into small tasks is a nice way to divide a project into chunks so you can always see the end of the tunnel. It also curbs the tendency to optimize prematurely, make the architecture more general than it needs to be for the task at hand, or add luxury features before the basic application is even working.
In a sense, IDPM is like test-driven development at a larger scale: first pick a task, then get it working, then pick another, and so on. It's all about incrementing functionality, not incrementing architecture or the size of the code base. This is a very pragmatic approach and leads to less stress than the attrition-based coding model, where you just code until there's nothing left to code. That model can result in lost productivity, where the major architecture undergoes several changes and redos just because the developer doesn't yet have any features working, and thus doesn't have a concrete idea of what the architecture needs to provide, just an inferred one based on a design that hasn't been translated to code.
In essence, IDPM is an organic way to grow software, where every evolutionary change in the software is driven by some need to satisfy a feature. It works well, and while it may not be the only good technique between the extremes of RUP and just-coding, it shows how well a moderate approach performs even in software development.
Tuesday, November 8, 2011
Learning the ropes with WattDepot
Here at UH, there's a URL at which you can see just about how much energy each floor of several of the university dorms is consuming. As part of a federally funded study, the university has outfitted the energy meters and submeters in several of the dorms with power, voltage, and energy consumption sensors that take readings at the sub-minute level and relay them back to a central server. By visiting a URL at that central server, you can pretty much see what energy consumption looks like for college dorm students.
The university's computer science department has designed and implemented a framework for energy data sharing, uniform access, storage, dissemination, analysis, and visualization, called WattDepot. The API is hosted on the Google Project Hosting site, and once you download it, you can have some command-line interactions with the server working in just a few minutes. WattDepot provides a nice API, designed to hide the lower-layer protocols, that makes interacting with the server much like interacting with a file on your local machine.
First I implemented a class called SourceListing that just listed the sources associated with a particular WattDepot server, which is known as the "owner" of those sources. This was pretty simple and pretty much spelled out in the WattDepot documentation examples on the Google project page. It took me about 15 minutes to write the code and another 15 minutes to set up my Eclipse IDE to associate the framework source and Javadocs with the library jar file, for easier editing, code completion, and Javadoc perusal.
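The core of it, reconstructed from memory, looked roughly like the sketch below; the client package paths and method names are as I recall them from the wiki examples, so they may differ slightly from the real API:

```java
import java.util.List;

import org.wattdepot.client.WattDepotClient;
import org.wattdepot.resource.source.jaxb.Source;

// Reconstructed from memory, not copied from my actual class: the import
// paths and method names follow the WattDepot wiki examples as I remember
// them and may not match the released client library exactly.
public class SourceListing {

  public static void main(String[] args) throws Exception {
    String serverUri = args[0]; // the WattDepot server URL
    WattDepotClient client = new WattDepotClient(serverUri);
    List<Source> sources = client.getSources();
    for (Source source : sources) {
      System.out.println(source.getName());
    }
  }
}
```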
Then I wrote a SourceLatency class that sorted all the sources at the same URL by latency. This involved reusing the code I already had and writing an anonymous inner class implementing the Comparator interface to do the comparison of latencies. This took me about 20 minutes.
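The heart of it was just this Java idiom, sketched here with a made-up holder class, since the real code works on WattDepot objects:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// A sketch of the sorting idiom in SourceLatency: an anonymous inner class
// doing the latency comparison. NamedLatency is a stand-in I invented for
// whatever object pairs a source name with its measured latency.
public class LatencySorter {

  /** Hypothetical holder for a source name and its latency in milliseconds. */
  public static class NamedLatency {
    public final String name;
    public final long latencyMillis;

    public NamedLatency(String name, long latencyMillis) {
      this.name = name;
      this.latencyMillis = latencyMillis;
    }
  }

  /** Sorts the list in place, lowest-latency sources first. */
  public static void sortByLatency(List<NamedLatency> sources) {
    Collections.sort(sources, new Comparator<NamedLatency>() {
      @Override
      public int compare(NamedLatency left, NamedLatency right) {
        long diff = left.latencyMillis - right.latencyMillis;
        return diff < 0 ? -1 : (diff > 0 ? 1 : 0);
      }
    });
  }
}
```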
After that, I wrote a SourceHierarchy class that uses the subsources attribute of each source, which spells out which sources are its "children," to construct a set of trees of sources, and then prints those trees recursively, using indentation to visualize the hierarchy much as files are shown in a file system browser, or on the command line by the Unix "tree" command. This took me about half an hour, since it involved formulating a game plan to build the trees in the most economical way. What I did was simply find the roots, by going through all the sources and eliminating every child as a root candidate. Then I printed the trees at those roots recursively, so there's never an explicit tree data structure in RAM, but I'm able to print out the tree structure nonetheless.
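Stripped of the WattDepot specifics, the root-finding-plus-recursion plan looks like this; the plain map from source name to subsource names is a stand-in I'm using for the sketch:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A sketch of the SourceHierarchy idea, using a plain map from source name
// to subsource names instead of WattDepot Source objects. It assumes every
// source appears as a key, with an empty list if it has no subsources.
public class HierarchyPrinter {

  public static void printHierarchy(Map<String, List<String>> subSources) {
    // A source is a root if no other source lists it as a child.
    Set<String> children = new HashSet<String>();
    for (List<String> subs : subSources.values()) {
      children.addAll(subs);
    }
    for (String name : subSources.keySet()) {
      if (!children.contains(name)) {
        printTree(name, subSources, 0);
      }
    }
  }

  private static void printTree(String name, Map<String, List<String>> subSources, int depth) {
    StringBuilder indent = new StringBuilder();
    for (int i = 0; i < depth; i++) {
      indent.append("  ");
    }
    System.out.println(indent.toString() + name);
    List<String> subs = subSources.get(name);
    if (subs != null) {
      for (String child : subs) {
        printTree(child, subSources, depth + 1);
      }
    }
  }
}
```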
After that, I was energized, so to say, and I proceeded to write a class called EnergyYesterday. Here the pain started. I had to find out how to have Java give me yesterday's date, so as not to hard-code it, and I had to find out how to translate between various date representations: XMLGregorianCalendar, which WattDepot uses, java.util.Date, and java.util.Calendar. Well, let's just say I found some code online that braves this tedious translation, I put it in a class, duly attributing it of course, and doing date calculations should be much easier from here on. This class took me maybe an hour and a half to write, much of it spent wrestling with dates and date formatting, cleaning up the useful date conversion code I found on the web, and making utility classes out of it. This task would have been harder had it not been for a simple WattDepot API function called getEnergyConsumed that takes two timestamps and provides the total energy consumed between them, so no adding or aggregating was necessary on my part. I just had to loop through all the sources at the server.
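The date plumbing boiled down to something like this sketch, assuming the standard javax.xml.datatype route to XMLGregorianCalendar (the utility class I actually committed is longer and credits its source):

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

// The flavor of date plumbing EnergyYesterday needed: compute yesterday's
// date without hard-coding it, then convert it into the XMLGregorianCalendar
// representation that WattDepot works with. A sketch, not the real utility class.
public class Timestamps {

  public static XMLGregorianCalendar yesterdayAsTimestamp() throws Exception {
    Calendar yesterday = Calendar.getInstance();
    yesterday.add(Calendar.DAY_OF_MONTH, -1); // roll back one day
    return toXmlTimestamp(yesterday.getTime());
  }

  public static XMLGregorianCalendar toXmlTimestamp(Date date) throws Exception {
    GregorianCalendar gregorian = new GregorianCalendar();
    gregorian.setTime(date);
    return DatatypeFactory.newInstance().newXMLGregorianCalendar(gregorian);
  }
}
```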
After that, I did some energy analysis in a class called HighestRecordedPowerYesterday, which uses an API call by the name of getPowerConsumed, but here I had to loop through each sensor's data points between the starting and ending timestamps and keep track of the maximum power consumed and its related timestamp. This took me about 45 minutes, and I borrowed much of the code from the WattDepot wiki documentation on the Google project site.
Finally, in a class called MondayAverageEnergy, I wrote code to average the energy consumed at each sensor over this past Monday and the one before it. Here I hard-coded the dates, and the code is a minor extension of EnergyYesterday. This took me about half an hour.
Now that I've accomplished these tasks, I feel like I have a decent grip on the WattDepot client API and the workflow for getting data from WattDepot servers, but I have a feeling there's much more to the WattDepot framework, and I'll definitely be writing more about that in the next few weeks.
Tuesday, November 1, 2011
Energy in Hawaii
Hawaii presents a set of unique challenges, but also fertile opportunities, in the areas of green energy production and energy conservation. Because Hawaii is isolated, it is not part of a supergrid like the contiguous US states are, and therefore has to import its energy from farther away and is more susceptible to temporary supply/demand imbalances. On the other hand, because of its unique location, year-round sunny climate, and large number of shoreline miles, Hawaii is an ideal candidate for the early success of wind, sea, and sun energy solutions.
Some of these solutions aren't yet cost-effective on the mainland, due to the easy exploitation of cheap fossil fuels or the technological and infrastructure gap between fossil fuel production and green energy production. However, they are already cost-effective in Hawaii, despite the same technological gap, because Hawaii pays a surcharge on its electrical energy costs, due to transportation and isolation, that makes electricity two to three times more expensive than on the mainland. Unlike the mainland, Hawaii derives most of its electrical energy from oil, which is the most expensive fossil fuel.
Because of this unique economic situation, green energy is being pursued in Hawaii more aggressively than in most mainland states, even by Republican administrations such as that of former governor Linda Lingle, who signed the Hawaii Clean Energy Initiative, going against the grain of a Republican party that is largely skeptical of global warming, even to the point that many Republicans view global warming as a liberal scheme to stifle capitalism.
However, green energy production and use do pose a few challenges. One of them is that green energy is typically intermittent. For example, a solar panel doesn't produce energy at night, and a windmill doesn't produce energy when there is no wind. So, to make the best use of these intermittent and time-varying energy sources, we need to integrate them into our current energy grid in a cohesive and efficient manner.
This means that two-way communication *and* control need to take place between the energy consumer's home and the electric power plant, and between the devices in the consumer's home. This would allow, for example, an electrical plant facing a demand spike to temporarily shut down the air conditioning systems of some of its clients' homes, in rotation, for a few minutes at a time, to smooth out the spike. It would also allow a home's air conditioning system to use more energy when that home's solar panels are producing more, by changing the temperature set-point of the air conditioner to take advantage of the temporarily greater energy being produced by the panels.
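As a toy illustration of that last idea, and nothing more than an illustration, the set-point logic could be as simple as this sketch, with the class, thresholds, and numbers pulled out of thin air:

```java
// A toy illustration of the set-point idea: when the (hypothetical) solar
// panels report a surplus over the household load, let the air conditioner
// cool a little more aggressively. All names and numbers here are made up.
public class SetPointController {

  private static final double DEFAULT_SET_POINT_CELSIUS = 25.0;
  private static final double SURPLUS_SET_POINT_CELSIUS = 23.0;
  private static final double SURPLUS_THRESHOLD_WATTS = 1500.0;

  public double chooseSetPoint(double solarOutputWatts, double householdLoadWatts) {
    double surplus = solarOutputWatts - householdLoadWatts;
    if (surplus > SURPLUS_THRESHOLD_WATTS) {
      return SURPLUS_SET_POINT_CELSIUS; // spend the free energy on extra cooling
    }
    return DEFAULT_SET_POINT_CELSIUS;
  }
}
```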
Of course, all this communication and control requires a decent amount of hardware and even more software, as we strive to make the devices smarter and as we get new ideas for how to program and reprogram them. So intelligent and robust software has a definite role to play in a green energy infrastructure. Beyond that, it may have an even bigger role to play in green energy research and calibration, since we need a way to visualize information in a decentralized manner, so that information from many places can be aggregated and viewed on one terminal. Even when deploying proven green energy solutions, an individual household will still want to adjust the parameters of the programs driving its energy-saving devices, to optimize its energy savings and target them at its specific needs, and this too requires software for input, validation, and communication.
So it's nice to know that software is going to play an integral role in the evolution of one of Hawaii's most exciting technology and research areas, and that getting the software right will save us all time, energy, and money.
Tuesday, October 25, 2011
5 questions about Java
1) What is erasure?
It refers to the fact that the type parameters used to instantiate generic Java classes do not exist at run time, but only at compile time.
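A quick way to see it:

```java
import java.util.ArrayList;
import java.util.List;

// Erasure in one breath: at run time both lists have the same class,
// because the type parameters only exist at compile time.
public class ErasureDemo {

  public static void main(String[] args) {
    List<String> strings = new ArrayList<String>();
    List<Integer> numbers = new ArrayList<Integer>();
    System.out.println(strings.getClass() == numbers.getClass()); // prints true
  }
}
```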
2) What three properties should be satisfied by objects with respect to the .equals() method?
Reflexivity, symmetry, and transitivity.
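For example, a minimal value class that honors all three (and, as Effective Java reminds us, keeps hashCode() in sync with equals()):

```java
// A minimal equals() that is reflexive, symmetric, and transitive,
// with a matching hashCode().
public final class Point {

  private final int x;
  private final int y;

  public Point(int x, int y) {
    this.x = x;
    this.y = y;
  }

  @Override
  public boolean equals(Object other) {
    if (this == other) {
      return true; // reflexive
    }
    if (!(other instanceof Point)) {
      return false;
    }
    Point that = (Point) other;
    return this.x == that.x && this.y == that.y; // symmetric and transitive
  }

  @Override
  public int hashCode() {
    return 31 * x + y;
  }
}
```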
3) How does test-driven development affect interface and API design?
Since the tests are written before their respective testee modules, the module interfaces and APIs are first formulated while writing the tests for those modules. Therefore, the interfaces and APIs are designed for usability and testability from the outset, and take into account the perspective from outside the class, as opposed to just the internal, implementation-oriented perspective.
4) Does inheritance hold with respect to the parameterized type in instantiations of generic classes in Java?
No, it only holds with respect to the container type. So, for example, List<Object> is not a supertype of List<String>, but Collection<String> is a supertype of List<String>, with the consequent implications for assignability.
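In code form:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Assignability with generics: the container type may vary, the type parameter may not.
public class GenericsDemo {

  public static void main(String[] args) {
    List<String> strings = new ArrayList<String>();

    Collection<String> ok = strings;            // fine: List<String> is a Collection<String>
    // List<Object> broken = strings;           // compile error: List<String> is not a List<Object>
    List<? extends Object> wildcard = strings;  // wildcards are the way to widen the parameter
  }
}
```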
5) What are the three levels of granularity for which we can disable PMD error testing?
Annotation level (classes or methods annotated with @SuppressWarnings) and line level (lines followed by //NOPMD).
For more info see: http://pmd.sourceforge.net/suppressing.html
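The two levels from the answer, sketched in code (the rule name in the annotation is just an example):

```java
// The two suppression levels from the answer above, sketched in code.
@SuppressWarnings("PMD.TooManyMethods") // annotation level: a whole class or a single method
public class SuppressionDemo {

  public int magicNumber() {
    return 42; // NOPMD -- line level: silence PMD for just this line
  }
}
```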
Friday, October 21, 2011
Adventures in Google Project Hosting
I set up a Google Project Hosting page for my Robocode Diablo robot, and it was a cinch. Like any helpful IDE, the project hosting website steps you through the process, providing, among other things, repository connection parameters and sample commands.
First things first. Google Project Hosting (which I'll call GPH) uses the Subversion version and configuration control system. I happened to already be somewhat fluent in Subversion (SVN), and whatever rust I had wasn't much of a problem, since many of the commands for SVN and Git (which I've been using at the expense of SVN for the past year) are identical.
The toughest part of getting my project hosted was just making sure I didn't create that extra directory at the top level that I so often end up making by mistake, whether it's when creating a project in an IDE or importing one via version control. In this case, it just comes down to making sure to put your project files directly into the repository's trunk directory. A little birdie told me about a neat trick to get this right: just check out the empty skeleton project onto your local machine and then put the initial project contents into the folder created by SVN. After that, just commit the whole thing and the files will be up in the repository. Voilà: you have avoided the extra directory problem.
The GPH site uses a specialized markup language for authoring user documentation that is more lightweight and easier on the eyes than HTML, and is similar to other markup languages on the market. Once you learn a few quirks, these languages have pretty low ceremony and pretty good utility for getting all the headers and paragraphs to look right.