Let’s move!


After 6 years working as an Eclipse plug-ins developer, I decided to take on a new big challenge: starting a PhD. I will still blog about Eclipse here (though certainly less often), because my PhD subject is about SOA and I’m sure Eclipse is somehow linked to it, since Eclipse is everywhere, isn’t it?

I would also like to take the opportunity of this post to publicly THANK Jerome for his help during these 6 years: “thanks”.

Stay tuned for this new story!

Object instances reuse and Eclipse API


Reading one of my colleagues’ code, I saw this:

GridLayout gl = new GridLayout();
gl.numColumns = 1;
gl.verticalSpacing = 0;
gl.marginHeight = 0;
gl.marginWidth = 0;

Composite c1 = new Composite(parent, SWT.NONE);
Composite c2 = new Composite(parent, SWT.NONE);
c1.setLayout(gl);
c2.setLayout(gl);

Personally, I always create a new layout for each of my GUI components. In my opinion it is easier to read than the piece of code above, and because there is only a small number of such objects (having a UI with thousands of GUI components each needing a layout is a bad idea, I think ;-)), creating a new one for each component doesn’t impact performance.

To go further into the problem, reading this code led me to the following question:

How do I know whether a given class instance can be “shared” across several contexts?

In the particular case of the GridLayout above, I was not able to find the answer in the Javadoc here. After looking at the source code of GridLayout, it seems that no internal state is saved anywhere, so it is possible to use the same layout instance for several components.
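To make the statelessness argument concrete outside SWT, here is a plain-Java sketch (the `StatelessLayout` and `Component` classes are made up for illustration, not SWT types): a configuration object that is only read, never written, can be shared, but sharing it still couples every consumer to later mutations.

```java
// Hypothetical stand-ins for GridLayout and Composite, for illustration only.
class StatelessLayout {
    int numColumns = 1;   // public configuration, like GridLayout's fields
    int marginWidth = 0;
}

class Component {
    private StatelessLayout layout;

    void setLayout(StatelessLayout l) {
        this.layout = l;  // stores a reference; never writes into the layout
    }

    int columns() {
        return layout.numColumns;
    }
}

public class SharedInstanceDemo {
    public static void main(String[] args) {
        StatelessLayout shared = new StatelessLayout();
        shared.numColumns = 2;

        Component a = new Component();
        Component b = new Component();
        a.setLayout(shared);
        b.setLayout(shared);

        // Both components read the same configuration.
        System.out.println(a.columns() + " " + b.columns()); // prints "2 2"

        // Because the instance is shared, a later mutation is visible to
        // every component at once; this is the real risk of reuse.
        shared.numColumns = 3;
        System.out.println(a.columns() + " " + b.columns()); // prints "3 3"
    }
}
```

This is exactly why sharing works for GridLayout only as long as nobody mutates the instance afterwards.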

Maybe I missed something, or maybe it would be interesting to indicate this information in the Javadoc. For sure, I’ll take care of that when writing the documentation for my own libraries.

Increase memory of Eclipse Ant Runner


I am running the Eclipse Ant runner from the command line in the following way:

set LAUNCHER=xx\xx\org.eclipse.equinox.launcher_1.1.3.0.jar
xx\xx\java.exe -jar %LAUNCHER% -application org.eclipse.ant.core.antRunner -buildfile build.xml

I am facing OutOfMemory errors in the heap space. Adding -vmargs -Xmx512m to the command line, or changing the eclipse.ini file of my Eclipse installation, didn’t fix the issue.

In order to be sure that the vmargs are taken into account, I simply wrote the following task:

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

public class DisplayTask extends Task {
    @Override
    public void execute() throws BuildException {
        // Print the maximum heap size in megabytes
        System.out.println(Runtime.getRuntime().maxMemory() / 1000000);
    }
}

and just launched it from my build.xml (using taskdef to define the task). The output is always 66, whatever I put in -vmargs -Xmx ….
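For reference, the build.xml wiring looks like this (a minimal sketch; the jar name and target name are my own, the task class is the DisplayTask above compiled in the default package):

```xml
<taskdef name="displaymemory" classname="DisplayTask" classpath="displaytask.jar"/>

<target name="show-memory">
    <displaymemory/>
</target>
```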
After several tests I found the solution: add the -Xmx512m option alone (without -vmargs) just after java.exe, as follows:

xx\xx\java.exe -Xmx512m -jar %LAUNCHER% -application org.eclipse.ant.core.antRunner -buildfile build.xml

Conclusion: the Ant runner application reads neither the -vmargs command line options nor the eclipse.ini file options. Thus the only way to increase its memory is to pass the option directly to the Java virtual machine.

Sequential Jobs


Following this post about RCP application progress reporting, we have the following use case:

  1. User action
  2. Start the first Job (unknown length)
  3. Wait for the first Job to finish, then start the second Job (known length)
  4. Wait for the second Job to finish, then start the third Job (known length)

We want to show this to the user in the following way:

  1. Have a main “User Action” dialog either without a global progress bar (because the first job’s length is unknown and varies a lot between executions, I cannot compute an accurate total length) or with an “unknown” length.
  2. In this dialog, have 3 sub-parts, one for each job, with one progress bar per job, and of course with the IProgressMonitor.UNKNOWN style for the first job.
  3. In this dialog, the progress bars are updated sequentially, following the underlying jobs.

This allows the end user to immediately see that his action is divided into 3 sub-tasks (the sub-tasks are meaningful for end users), and each time a new sub-task starts he can see the length of this sub-task (unknown for the first).

After much searching we were not able to implement this using the Eclipse Job API, and today we report these 3 sub-tasks as 3 individual successive dialogs, with the drawback that the end user may initially think that his action will be completed at the end of the first, unknown-length sub-task.
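The sequencing itself (start B only once A has completed) is the easy part; here is a plain-Java sketch of the chain using CompletableFuture rather than the Eclipse Jobs API (task names and the log list are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

public class SequentialTasks {
    // Thread-safe log so we can observe the execution order.
    static final List<String> log = new CopyOnWriteArrayList<>();

    static void run(String name) {
        log.add("start " + name);  // a real Job would report progress here
        log.add("end " + name);
    }

    public static void main(String[] args) {
        // Each stage starts only after the previous one has completed,
        // mirroring the listener-based Job chaining described above.
        CompletableFuture
            .runAsync(() -> run("first"))   // unknown length
            .thenRun(() -> run("second"))   // known length
            .thenRun(() -> run("third"))    // known length
            .join();
        System.out.println(log);
    }
}
```

The hard part, as described above, is not the chaining but presenting the three stages inside one progress dialog.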

How do the Eclipse workbench team, and YOU, handle such situations?

Build command line tools on top of Eclipse


Eclipse is great! Isn’t it?

Recently, some customers of my RCP application asked me for a command line version of the application. This application parses binary trace files and provides analyses of them, whose results are displayed in several Eclipse views. The point here was to output “CSV versions” of these views through a command line tool.

Thanks to the modularity of Eclipse, it was really easy to add to the existing architecture:

a new command line tool plug-in contributing a command line application through the org.eclipse.core.runtime.applications extension point.
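For the record, such a contribution in the plug-in’s plugin.xml looks roughly like this (the extension id and the class name are hypothetical; the class has to implement IApplication):

```xml
<extension id="cmdline" point="org.eclipse.core.runtime.applications">
   <application>
      <run class="com.example.cmdline.CsvExportApplication"/>
   </application>
</extension>
```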

Yes, it is as simple as it looks, except maybe for the following tips to remember:

  1. Always remove useless dependencies in all your plugins.
  2. Ensure that the Activators of plugins not contributing to the workbench extend Plugin, not AbstractUIPlugin.
  3. Always isolate UI code in a dedicated plugin; otherwise building a command line tool will be a nightmare.
  4. Keep in mind that almost everything can be done with Eclipse ;-)

Manu


Analog Computer


Today, looking at the books on my desk, I decided to build the world’s first analog embedded computer system: from the hardware up to the top-level user interface software.

What do you think about it?

(Photo: 20110404-113758.jpg)

Unit Testing RCP Applications


Including unit test execution, and stopping in case of failure, in a continuous integration process is a MUST. When setting up such a process for an Eclipse RCP application (based on Ant and PDE build scripts), we have to decide how to install and launch the unit tests.

Solution: after building the RCP product, install the product, using the p2 director command line, into the Eclipse SDK that was used to build it, with a specific configuration. Then just launch the PDE unit test application as follows:

<java dir="./plugins" classname="org.eclipse.equinox.launcher.Main" fork="yes" classpathref="equinox.launcher.class.path" maxmemory="512m">
    <arg line="-application org.eclipse.pde.junit.runtime.uitestapplication -port ${port} -testpluginname ${plugin} -classnames ${classes} ${config}" />
</java>

This solution is great but has one main drawback: we are not running the unit tests in the “final” environment, but in an SDK environment enhanced with our RCP product. A better solution, to my eyes, would be to enhance the RCP product with the few plug-ins required by the PDE unit test application, in order to be as close as possible to the final environment when launching the tests.

I’m investigating that ….. next episode coming soon!

Note: feel free to comment on this post to share your point of view and experience with RCP unit testing.

How do you report progress in your software ??


In order to improve the usability of my RCP application, I decided to manage all the “long” operations using Jobs. This post will introduce several solutions for handling “composite” task progress reporting, through a concrete example I faced this morning.

Image just to bring your eyes to my post and encourage you to read it, is it working ??

Having already used the Job API, I felt confident about the time needed to implement this. I was right about the time needed to use the Job API, but I was wrong about how to organize several related operations into several (or one??) job(s).

Let’s describe the context. The user performs a “File->Open” action and wants a report on the progress of this action. Progress reporting is needed because the opened files are big and a lot of computation has to be done on them. This “File->Open” action is composed of 3 sequential “sub-tasks”: parsing, analysis and display. I can easily report accurate progress for each of these tasks, but unfortunately I cannot obtain an accurate estimate of each task’s share of the global process composed of the 3 tasks.

Today I have 2 solutions for reporting what is happening behind the scenes and how much time it will take (this IS really what the user cares about!!!).

First solution: use 3 distinct Jobs.

This solution can be easily implemented using IJobChangeListeners. A first Job is created for the parsing task. This Job is scheduled and, thanks to Job listeners, I am notified when it completes, so that I can create and schedule the analysis Job. The same process applies between the analysis Job and the display Job. This solution presents the 3 tasks to the user with an accurate time estimate for EACH ONE of these tasks. As I mentioned before, this is the best I can do, because I cannot estimate each task’s share of the global process. But here a new user may think that the global “File->Open” action will be completed at the end of the first Job ….??!!! Another drawback of this solution is that the user is prompted with 3 UI progress dialogs (all my Jobs are user Jobs, so there are successively a “Parsing File” dialog, an “Analyze File” dialog and a “Display” dialog) for only one “File->Open” action.

Second solution: use one main Job and 3 SubMonitors inside this Job.

I came to this solution to try to fix the second drawback of the first one (3 UI dialogs for one action). I will not step into implementation details here, but let’s analyze the result: we have a unique UI progress dialog, but it is NOT accurate; we can clearly distinguish the 3 stages, each stepping at a different speed. What is better: only one inaccurate UI dialog, or 2 “surprise” dialogs that appear after the first, accurate one?

Another solution would be (I didn’t find any way to implement it for now):

Third solution (not sure this can be done .. any ideas??): have one Job with 3 SubMonitors inside it, but report the 3 SubMonitors as 3 × 100%. What I mean here is to have only one UI progress dialog, called “Opening File”, that is “filled in from 0% to 100%” three times (once for each sub-task). This solution is the same as the first one, but it fixes the problem of having 3 separate UI dialogs.
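Leaving the Jobs API aside, the reporting scheme of this third solution can be simulated in plain Java with a hypothetical monitor that resets its bar between sub-tasks (this is NOT the Eclipse IProgressMonitor API, just a sketch of the desired behavior):

```java
import java.util.ArrayList;
import java.util.List;

// One "dialog" whose single bar is filled from 0% to 100% once per sub-task.
public class ResettingMonitor {
    final List<String> reports = new ArrayList<>();
    private String taskName;
    private int totalWork;
    private int worked;

    void beginSubTask(String name, int totalWork) {
        this.taskName = name;
        this.totalWork = totalWork;
        this.worked = 0;  // the single bar restarts at 0% for each sub-task
        report();
    }

    void worked(int units) {
        worked = Math.min(totalWork, worked + units);
        report();
    }

    private void report() {
        reports.add(taskName + ": " + (100 * worked / totalWork) + "%");
    }

    public static void main(String[] args) {
        ResettingMonitor m = new ResettingMonitor();
        for (String task : new String[] { "Parsing", "Analyzing", "Displaying" }) {
            m.beginSubTask(task, 2);
            m.worked(1);
            m.worked(1);  // each sub-task ends at 100% before the next starts
        }
        m.reports.forEach(System.out::println);
    }
}
```

The open question is precisely whether the Jobs/SubMonitor machinery can be convinced to drive one progress dialog this way.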

In all of these solutions, the end user “will” get a “wrong” first time estimate on the first usage …. Thus I would be interested to know how you handle such situations, so feel free to leave comments on this topic.

How to drop GEF editor figures into the outside world


In a GEF editor, I want to let users drag and drop figures (== model objects) to another custom view available in my tool’s perspective.

Adding a DragSource with my own drag transfer on my GEF editor’s figure canvas allows that. But as a side effect, which I do not want, this disables the possibility of moving the figures INSIDE the editor using drag and drop.

After some investigation I found this post on the Eclipse forums. The solution is acceptable but not perfect. Thus I investigated deeper and came to the following pure SWT snippet, which explains why we have this behavior: MouseMove events (the ones used by GEF to support dragging INSIDE the editor) are no longer fired once a drag source has been added:

import org.eclipse.swt.dnd.DND;
import org.eclipse.swt.dnd.DragSource;
import org.eclipse.swt.dnd.DragSourceEvent;
import org.eclipse.swt.dnd.DragSourceListener;
import org.eclipse.swt.dnd.FileTransfer;
import org.eclipse.swt.dnd.Transfer;
import org.eclipse.swt.events.MouseEvent;
import org.eclipse.swt.events.MouseMoveListener;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class SwtTest {
    public static void main(String[] args) {
        final Display display = new Display();
        final Shell shell = new Shell(display);

        // Without the DragSource below, this listener fires on every mouse move.
        shell.addMouseMoveListener(new MouseMoveListener() {
            @Override
            public void mouseMove(MouseEvent e) {
                System.out.println("Mouse move");
            }
        });

        DragSourceListener dragListener = new DragSourceListener() {
            public void dragStart(DragSourceEvent event) {
                System.out.println("dragStart");
            }

            public void dragSetData(DragSourceEvent event) {
                System.out.println("dragSetData");
            }

            public void dragFinished(DragSourceEvent event) {
                System.out.println("dragFinished");
            }
        };

        // Once this DragSource is attached, MouseMove events stop being fired.
        DragSource dragSource = new DragSource(shell, DND.DROP_COPY | DND.DROP_MOVE);
        dragSource.addDragListener(dragListener);
        dragSource.setTransfer(new Transfer[] { FileTransfer.getInstance() });

        shell.pack();
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch())
                display.sleep();
        }
        display.dispose();
    }
}

I guess this is the normal behavior from an SWT point of view.

As a side note, I would be interested in a solution to this issue other than the one proposed on the Eclipse forum, which consists in activating my DragSource only if a given condition is met, such as Shift being pressed (this is done in the DragSourceListener.dragStart method by setting event.doit to false).