Writing Testable JavaScript (written by Ben Cherry)

The engineering culture at Twitter requires tests. Lots of tests. I hadn’t had formal experience with JavaScript testing before Twitter, so I’ve been learning a lot as I go. In particular, a number of patterns I used to use, write about, and encourage have turned out to be bad for writing testable code. So I thought it would be worthwhile to share a few of the most important principles I’ve developed for writing testable JavaScript. The examples I provide are based on QUnit, but should be just as applicable to any JavaScript testing framework.

Avoid Singletons

One of my most popular posts was about using the JavaScript Module Pattern to create powerful singletons in your application. This approach can be simple and useful, but it creates problems for testing, for one simple reason: singletons suffer state pollution between tests. Rather than creating your singletons as modules, you should compose them as constructable objects and assign a single, default instance at the global level in your application init.

For example, consider the following singleton module (contrived example, of course):

var dataStore = (function() {
	var data = [];
	return {
		push: function (item) {
			data.push(item);
		},
		pop: function() {
			return data.pop();
		},
		length: function() {
			return data.length;
		}
	};
}());

With this module, we may wish to test the pop and length methods. Here’s a simple QUnit test suite:

module("dataStore");
test("pop", function() {
	dataStore.push("foo");
	dataStore.push("bar");
	equal(dataStore.pop(), "bar", "popping returns the most-recently pushed item");
});

test("length", function() {
	dataStore.push("foo");
	equal(dataStore.length(), 1, "adding 1 item makes the length 1");
});

When running this test suite, the assertion in the length test will fail, but it’s not clear from looking at it why it should. The problem is that state has been left in dataStore from the previous test. Merely re-ordering these tests will cause the length test to pass, which is a clear red flag that something is wrong. We could fix this with setup or teardown that reverts the state of dataStore, but that means that we need to constantly maintain our test boilerplate as we make implementation changes in the dataStore module. A better approach is the following:

function newDataStore() {
	var data = [];
	return {
		push: function (item) {
			data.push(item);
		},
		pop: function() {
			return data.pop();
		},
		length: function() {
			return data.length;
		}
	};
}

var dataStore = newDataStore();

Now, your test suite will look like this:

module("dataStore");
test("pop", function() {
	var dataStore = newDataStore();
	dataStore.push("foo");
	dataStore.push("bar");
	equal(dataStore.pop(), "bar", "popping returns the most-recently pushed item");
});

test("length", function() {
	var dataStore = newDataStore();
	dataStore.push("foo");
	equal(dataStore.length(), 1, "adding 1 item makes the length 1");
});

This allows our global dataStore to behave exactly as it did before, while allowing our tests to avoid polluting each other. Each test owns its own instance of a DataStore object, which will be garbage collected when the test completes.

Avoid Closure-based Privacy

Another pattern I used to promote is real private members in JavaScript. The advantage is that you can keep globally-accessible namespaces free of unnecessary references to private implementation details. However, overuse of this pattern can lead to untestable code. This is because your test suite cannot access, and thus cannot test, private functions hidden in closures. Consider the following:

function Templater() {
	function supplant(str, params) {
		for (var prop in params) {
			str = str.split("{" + prop + "}").join(params[prop]);
		}
		return str;
	}

	var templates = {};

	this.defineTemplate = function(name, template) {
		templates[name] = template;
	};

	this.render = function(name, params) {
		if (typeof templates[name] !== "string") {
			throw "Template " + name + " not found!";
		}

		return supplant(templates[name], params);
	};
}

The crucial method for our Templater object is supplant, but we cannot access it from outside the closure of the constructor. Thus, a testing suite like QUnit cannot hope to verify that it works as intended. In addition, we cannot verify that our defineTemplate method does anything without trying a .render() call on the template and watching for an exception. We could simply add a getTemplate() method, but then we’d be adding methods to the public interface solely to allow testing, which is not a good approach. While the issues here are probably just fine in this simple example, building complex objects with important private methods will lead to relying on untestable code, which is a red flag. Here’s a testable version of the above:

function Templater() {
	this._templates = {};
}

Templater.prototype = {
	_supplant: function(str, params) {
		for (var prop in params) {
			str = str.split("{" + prop + "}").join(params[prop]);
		}
		return str;
	},
	render: function(name, params) {
		if (typeof this._templates[name] !== "string") {
			throw "Template " + name + " not found!";
		}

		return this._supplant(this._templates[name], params);
	},
	defineTemplate: function(name, template) {
		this._templates[name] = template;
	}
};

And here’s a QUnit test suite for it:

module("Templater");
test("_supplant", function() {
	var templater = new Templater();
	equal(templater._supplant("{foo}", {foo: "bar"}), "bar");
	equal(templater._supplant("foo {bar}", {bar: "baz"}), "foo baz");
});

test("defineTemplate", function() {
	var templater = new Templater();
	templater.defineTemplate("foo", "{foo}");
	equal(templater._templates.foo, "{foo}");
});

test("render", function() {
	var templater = new Templater();
	templater.defineTemplate("hello", "hello {world}!");
	equal(templater.render("hello", {world: "internet"}), "hello internet!");
});

Notice that our test for render is really just a test that defineTemplate and supplant integrate correctly with each other. We’ve already tested those methods in isolation, which will allow us to easily discover which components are really breaking when tests of the render method fail.

Write Tight Functions

Writing tight functions is important in any language, but JavaScript presents its own reasons to do so. Much of what you do with JavaScript is done against global singletons provided by the environment, singletons that your test suite also relies on. For instance, testing a URL re-writer will be difficult if all of your methods try to assign window.location. Instead, you should break your system into its logical components that decide what to do, then write short functions that actually do it. You can test the logical functions with various inputs and outputs, and leave the final function that modifies window.location untested. Provided you’ve composed your system correctly, this should be safe.

Here’s an example URL rewriter that is not testable:

function redirectTo(url) {
	if (url.charAt(0) === "#") {
		window.location.hash = url;
	} else if (url.charAt(0) === "/") {
		window.location.pathname = url;
	} else {
		window.location.href = url;
	}
}

The logic in this example is relatively simple, but we can imagine a more complex redirector. As complexity grows, we will not be able to test this method without causing the window to redirect, thus leaving our test suite entirely.

Here’s a testable version:

function _getRedirectPart(url) {
	if (url.charAt(0) === "#") {
		return "hash";
	} else if (url.charAt(0) === "/") {
		return "pathname";
	} else {
		return "href";
	}
}

function redirectTo(url) {
	window.location[_getRedirectPart(url)] = url;
}

And now we can write a simple test suite for _getRedirectPart:

test("_getRedirectPart", function() {
	equal(_getRedirectPart("#foo"), "hash");
	equal(_getRedirectPart("/foo"), "pathname");
	equal(_getRedirectPart("http://foo.com"), "href");
});

Now the meat of redirectTo has been tested, and we don’t have to worry about accidentally redirecting out of our test suite.

Note: There is an alternative solution, which is to create a `performRedirect` function that does the location change, but stub that out in your test suite. This is a common practice for many, but I’ve been trying to avoid method stubbing. I find basic QUnit to work well in every situation I’ve encountered so far, and would prefer not to have to remember to stub out a method like that for my tests, but your case may differ.
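
For those who prefer the stubbing route, here’s a rough sketch of what that might look like (the performRedirect split and the in-test stub are my own illustration, not code from the original article):

function redirectTo(url) {
	performRedirect(_getRedirectPart(url), url);
}

function performRedirect(part, url) {
	// the one untested line that actually touches the browser
	window.location[part] = url;
}

test("redirectTo", function() {
	// swap the real performRedirect for a recording stub
	var original = performRedirect,
		calls = [];
	performRedirect = function(part, url) {
		calls.push([part, url]);
	};

	redirectTo("#foo");
	same(calls, [["hash", "#foo"]], "redirectTo delegates to performRedirect");

	performRedirect = original;
});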

Write Lots of Tests

This is a no-brainer, but it’s important to remember. Many programmers write too few tests because writing them is hard or a lot of work. I suffer from this problem all the time, so I threw together a little helper for QUnit that makes writing lots of tests much easier. It’s a function called testCases that you call within a test block, passing a function, a calling context, and an array of inputs and expected outputs to try and compare. With it, you can quickly build up a robust suite for your input/output functions for rigorous testing.

function testCases(fn, context, tests) {
	for (var i = 0; i < tests.length; i++) {
		same(fn.apply(context, tests[i][0]), tests[i][1],
			tests[i][2] || JSON.stringify(tests[i]));
	}
}

And here’s a simple example use:

test("foo", function() {
	testCases(foo, null, [
		[["bar", "baz"], "barbaz"],
		[["bar", "bar"], "barbar", "a passing test"]
	]);
});

Conclusions

There is plenty more to write about testable JavaScript, and I’m sure there are many good books, but I hope this was a good overview of practical cases I encounter on a daily basis. I’m by no means a testing expert, so please let me know if I’ve made mistakes or given bad advice.

JavaScript Module Pattern: In-Depth (written by Ben Cherry)

The module pattern is a common JavaScript coding pattern. It’s generally well understood, but there are a number of advanced uses that have not gotten a lot of attention. In this article, I’ll review the basics and cover some truly remarkable advanced topics, including one which I think is original.

The Basics

We’ll start out with a simple overview of the module pattern, which has been well-known since Eric Miraglia (of YUI) first blogged about it three years ago. If you’re already familiar with the module pattern, feel free to skip ahead to “Advanced Patterns”.

Anonymous Closures

This is the fundamental construct that makes it all possible, and really is the single best feature of JavaScript. We’ll simply create an anonymous function, and execute it immediately. All of the code that runs inside the function lives in a closure, which provides privacy and state throughout the lifetime of our application.

(function () {
	// ... all vars and functions are in this scope only
	// still maintains access to all globals
}());

Notice the () around the anonymous function. This is required by the language, since statements that begin with the token function are always considered to be function declarations. Including () creates a function expression instead.

Global Import

JavaScript has a feature known as implied globals. Whenever a name is used, the interpreter walks the scope chain backwards looking for a var statement for that name. If none is found, that variable is assumed to be global. If it’s used in an assignment, the global is created if it doesn’t already exist. This means that using or creating global variables in an anonymous closure is easy. Unfortunately, this leads to hard-to-manage code, as it’s not obvious (to humans) which variables are global in a given file.
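
For instance, here is a contrived sketch of an implied global (the names are made up purely for illustration):

function addTask(task) {
	// no var statement for taskCount, so the interpreter walks the scope
	// chain, finds nothing, and creates (or reuses) a global named taskCount
	taskCount = (window.taskCount || 0) + 1;
	return task + " (#" + taskCount + ")";
}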

Luckily, our anonymous function provides an easy alternative. By passing globals as parameters to our anonymous function, we import them into our code, which is both clearer and faster than implied globals. Here’s an example:

(function ($, YAHOO) {
	// now have access to globals jQuery (as $) and YAHOO in this code
}(jQuery, YAHOO));

Module Export

Sometimes you don’t just want to use globals, but you want to declare them. We can easily do this by exporting them, using the anonymous function’s return value. Doing so will complete the basic module pattern, so here’s a complete example:

var MODULE = (function () {
	var my = {},
		privateVariable = 1;

	function privateMethod() {
		// ...
	}

	my.moduleProperty = 1;
	my.moduleMethod = function () {
		// ...
	};

	return my;
}());

Notice that we’ve declared a global module named MODULE, with two public properties: a method named MODULE.moduleMethod and a variable named MODULE.moduleProperty. In addition, it maintains private internal state using the closure of the anonymous function. Also, we can easily import needed globals, using the pattern we learned above.

Advanced Patterns

While the above is enough for many uses, we can take this pattern further and create some very powerful, extensible constructs. Let’s work through them one by one, continuing with our module named MODULE.

Augmentation

One limitation of the module pattern so far is that the entire module must be in one file. Anyone who has worked in a large code-base understands the value of splitting code among multiple files. Luckily, we have a nice solution to augment modules. First, we import the module, then we add properties, then we export it. Here’s an example, augmenting our MODULE from above:

var MODULE = (function (my) {
	my.anotherMethod = function () {
		// added method...
	};

	return my;
}(MODULE));

We use the var keyword again for consistency, even though it’s not necessary. After this code has run, our module will have gained a new public method named MODULE.anotherMethod. This augmentation file will also maintain its own private internal state and imports.

Loose Augmentation

While our example above requires our initial module creation to be first, and the augmentation to happen second, that isn’t always necessary. One of the best things a JavaScript application can do for performance is to load scripts asynchronously. We can create flexible multi-part modules that can load themselves in any order with loose augmentation. Each file should have the following structure:

var MODULE = (function (my) {
	// add capabilities...

	return my;
}(MODULE || {}));

In this pattern, the var statement is always necessary. Note that the import will create the module if it does not already exist. This means you can use a tool like LABjs and load all of your module files in parallel, without needing to block.
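
For example, with a loader such as LABjs, the module files can be requested in parallel and executed in whichever order they arrive (a rough sketch; check the LABjs documentation for exact usage):

$LAB
	.script("module-core.js")
	.script("module-widgets.js")
	.script("module-ajax.js");
// no .wait() calls are needed, because every file starts from (MODULE || {})
// and can therefore run before or after any of the others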

Tight Augmentation

While loose augmentation is great, it does place some limitations on your module. Most importantly, you cannot override module properties safely. You also cannot use module properties from other files during initialization (but you can at run-time after initialization). Tight augmentation implies a set loading order, but allows overrides. Here is a simple example (augmenting our original MODULE):

var MODULE = (function (my) {
	var old_moduleMethod = my.moduleMethod;

	my.moduleMethod = function () {
		// method override, has access to old through old_moduleMethod...
	};

	return my;
}(MODULE));

Here we’ve overridden MODULE.moduleMethod while maintaining a reference to the original method, should it be needed.

Cloning and Inheritance

var MODULE_TWO = (function (old) {
	var my = {},
		key;

	for (key in old) {
		if (old.hasOwnProperty(key)) {
			my[key] = old[key];
		}
	}

	var super_moduleMethod = old.moduleMethod;
	my.moduleMethod = function () {
		// override method on the clone, access to super through super_moduleMethod
	};

	return my;
}(MODULE));

This pattern is perhaps the least flexible option. It does allow some neat compositions, but that comes at the expense of flexibility. As I’ve written it, properties which are objects or functions will not be duplicated; they will exist as one object with two references, and changing one will change the other. This could be fixed for objects with a recursive cloning process, but probably cannot be fixed for functions, except perhaps with eval. Nevertheless, I’ve included it for completeness.
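
For reference, a rough sketch of such a recursive cloning process might look like this (it ignores arrays, Dates, and cyclic references, and still copies functions by reference):

function deepClone(old) {
	var key, copy = {};
	for (key in old) {
		if (old.hasOwnProperty(key)) {
			// recurse into nested objects; everything else is copied by reference
			copy[key] = (typeof old[key] === "object" && old[key] !== null)
				? deepClone(old[key])
				: old[key];
		}
	}
	return copy;
}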

Cross-File Private State

One severe limitation of splitting a module across multiple files is that each file maintains its own private state, and does not get access to the private state of the other files. This can be fixed. Here is an example of a loosely augmented module that will maintain private state across all augmentations:

var MODULE = (function (my) {
	var _private = my._private = my._private || {},
		_seal = my._seal = my._seal || function () {
			delete my._private;
			delete my._seal;
			delete my._unseal;
		},
		_unseal = my._unseal = my._unseal || function () {
			my._private = _private;
			my._seal = _seal;
			my._unseal = _unseal;
		};

	// permanent access to _private, _seal, and _unseal

	return my;
}(MODULE || {}));

Any file can set properties on its local variable _private, and they will be immediately available to the others. Once this module has loaded completely, the application should call MODULE._seal(), which will prevent external access to the internal _private. If this module were to be augmented again, further in the application’s lifetime, one of the internal methods, in any file, can call _unseal() before loading the new file, and call _seal() again after it has been executed. This pattern occurred to me today while I was at work; I have not seen it elsewhere. I think this is a very useful pattern, and it would have been worth writing about all on its own.
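
As a quick illustration, a later augmentation file might use the shared private state like this (the loadedFiles property and the final _seal() call are just an example of how the pattern is meant to be used):

var MODULE = (function (my) {
	var _private = my._private = my._private || {};

	// visible to every other augmentation file that runs before _seal()
	_private.loadedFiles = (_private.loadedFiles || 0) + 1;

	return my;
}(MODULE || {}));

// once the application has finished loading every module file:
MODULE._seal();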

Sub-modules

Our final advanced pattern is actually the simplest. There are many good cases for creating sub-modules. It is just like creating regular modules:

MODULE.sub = (function () {
	var my = {};
	// ...

	return my;
}());

While this may have been obvious, I thought it worth including. Sub-modules have all the advanced capabilities of normal modules, including augmentation and private state.

Conclusions

Most of the advanced patterns can be combined with each other to create more useful patterns. If I had to advocate a route to take in designing a complex application, I’d combine loose augmentation, private state, and sub-modules.

I haven’t touched on performance here at all, but I’d like to put in one quick note: the module pattern is good for performance. It minifies really well, which makes downloading the code faster. Using loose augmentation allows easy non-blocking parallel downloads, which further speeds things up. Initialization time is probably a bit slower than with other methods, but worth the trade-off. Run-time performance should suffer no penalties so long as globals are imported correctly, and will probably gain speed in sub-modules by shortening the reference chain with local variables.

To close, here’s an example of a sub-module that loads itself dynamically to its parent (creating it if it does not exist). I’ve left out private state for brevity, but including it would be simple. This code pattern allows an entire complex hierarchical code-base to be loaded completely in parallel, sub-modules and all.

var UTIL = (function (parent, $) {
	var my = parent.ajax = parent.ajax || {};

	my.get = function (url, params, callback) {
		// ok, so I'm cheating a bit :)
		return $.getJSON(url, params, callback);
	};

	// etc...

	return parent;
}(UTIL || {}, jQuery));

I hope this has been useful, and please leave a comment to share your thoughts. Now, go forth and write better, more modular JavaScript!

Equinox p2 Repository Mirroring

Equinox p2 manages all of its data in repositories. There are two types of repositories: artifact and metadata.

The Repository Mirroring applications can be used to mirror artifact and metadata repositories. In addition, users can do selective mirroring of artifacts or metadata, either to create a more specific mirror (e.g. mirroring only the latest code) or to merge content into an existing mirror.

Mirroring Metadata

eclipse -nosplash -verbose -application org.eclipse.equinox.p2.metadata.repository.mirrorApplication -source <source URL, e.g. http://download.eclipse.org/eclipse/updates/3.4milestones/> -destination <destination URL, e.g. file:/tmp/3.4milestonesMirror/>

Mirroring Artifacts

eclipse -nosplash -verbose -application org.eclipse.equinox.p2.artifact.repository.mirrorApplication -source <source URL, e.g. http://download.eclipse.org/eclipse/updates/3.4milestones/> -destination <destination URL, e.g. file:/tmp/3.4milestonesMirror/>

Render Charts in Jasper HTML using Spring

This is a solution I spent two full days on, searching Google and experimenting by trial and error. The main issue I was facing was that the charts in my report would not render in the HTML report, while they rendered properly in the PDF version.
I assume you already know how to set up Jasper using Spring. You have to do four things to make the graphs appear in your HTML:
1. Add a new servlet and servlet mapping in your web.xml:

<servlet>
<servlet-name>JasperImageServlet</servlet-name>
<servlet-class>net.sf.jasperreports.j2ee.servlets.ImageServlet</servlet-class>
</servlet>

<servlet-mapping>
<servlet-name>JasperImageServlet</servlet-name>
<url-pattern>/image/*</url-pattern>
</servlet-mapping>

2. Set the IMAGES_URI parameter in your jasper-view.xml (the file where you define your Jasper views). The URI path should match the servlet path you defined in web.xml:

<util:map id="exportParameterMap">
<entry key="net.sf.jasperreports.engine.export.JRHtmlExporterParameter.IS_USING_IMAGES_TO_ALIGN">
<value>false</value>
</entry>
<entry key="net.sf.jasperreports.engine.export.JRHtmlExporterParameter.IS_WHITE_PAGE_BACKGROUND">
<value>false</value>
</entry>
<entry key="net.sf.jasperreports.engine.export.JRHtmlExporterParameter.IMAGES_URI">
<value>image?image=</value>
</entry>
</util:map>

<bean id="roleUsageReport"
class="com.jerry.util.SpReportsMultiFormatView"
p:url="/resources/shared/reports/usageAnalytics.jasper"
p:reportDataKey="datasource"
p:exporterParameters-ref="exportParameterMap"/>

3. Extend JasperReportsMultiFormatView and override the renderReport method as shown below. In this case, I named the subclass com.jerry.util.SpReportsMultiFormatView, as you can see in the XML above.

public class SpReportsMultiFormatView extends JasperReportsMultiFormatView {

	@Override
	protected void renderReport(JasperPrint populatedReport,
			Map<String, Object> model, HttpServletResponse response)
			throws Exception {
		if (model.containsKey("requestObject")) {
			HttpServletRequest request = (HttpServletRequest) model.get("requestObject");
			request.getSession().setAttribute(ImageServlet.DEFAULT_JASPER_PRINT_SESSION_ATTRIBUTE, populatedReport);
		}
		super.renderReport(populatedReport, model, response);
	}

}

4. In your Spring controller, set the ‘requestObject’ in your model:

JRDataSource JRdataSource = new JRBeanCollectionDataSource(usageList);
Map<String,Object> parameterMap = new HashMap<String,Object>();
parameterMap.put("format", "html");
parameterMap.put("requestObject", req);
parameterMap.put("datasource", JRdataSource);
return new ModelAndView("roleUsageReport", parameterMap);

These changes should make the graphs/images render properly in your HTML report.

Oracle – How to find and close connections

Find the users connected to Oracle

SELECT
substr(a.spid,1,9) pid,
substr(b.sid,1,5) sid,
substr(b.serial#,1,5) ser#,
substr(b.machine,1,6) box,
substr(b.username,1,10) username,
substr(b.osuser,1,8) os_user,
substr(b.program,1,30) program
FROM v$session b, v$process a
WHERE b.paddr = a.addr
AND type='USER'
ORDER BY spid;

Close a specific connection using the sid and ser# from the query above:

alter system disconnect session 'sid,ser#' immediate;

Related PL/SQL that I wrote to accomplish this task:

DECLARE
	tmp_sidserial VARCHAR2(15) := '';
	CURSOR cur_user IS
		SELECT b.sid, b.serial#
		FROM v$session b
		WHERE TYPE = 'USER' AND b.username = 'JJACOB'
		ORDER BY sid;
BEGIN
	FOR user_rec IN cur_user
	LOOP
		tmp_sidserial := user_rec.sid || ',' || user_rec.serial#;
		EXECUTE IMMEDIATE 'ALTER SYSTEM DISCONNECT SESSION ''' || tmp_sidserial || ''' IMMEDIATE';
	END LOOP;
END;
/

Using the CMIS (Content Management Interoperability Services) API with Alfresco ECM

Start Alfresco and go to the URL, http://localhost:8080/alfresco/service/cmis/index.html

Expand the CMIS Repository Information to find the Repository ID

Use the code below to connect to Alfresco.

private Session connectAlfresco() {

	final String ALFRSCO_CMIS_URL = "http://localhost:8080/alfresco/cmisatom";
	final String REPOSITORY_ID = "ab3cd777-9999-5555-55a5-a55cc7777777";

	final Map<String, String> parameters = new HashMap<String, String>();
	parameters.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
	parameters.put(SessionParameter.ATOMPUB_URL, ALFRSCO_CMIS_URL);
	parameters.put(SessionParameter.REPOSITORY_ID, REPOSITORY_ID);
	parameters.put(SessionParameter.USER, "admin");
	parameters.put(SessionParameter.PASSWORD, "admin");

	// create the session
	final Session session = SessionFactoryImpl.newInstance().createSession(parameters);
	return session;
}

Now you can create, update, and delete documents, read a file, or query for documents. Doing all of them at once doesn’t make much sense, but the idea is to show how the API can be used.

// FOLDER_PATH is assumed to be a static field on this class, e.g.
// static final String FOLDER_PATH = "/Jerry's Folders";

public static void main(final String[] args) {

	final AlfrescoServiceMediator md = new AlfrescoServiceMediator();
	final Session session = md.connectAlfresco();

	// get repository info
	final RepositoryInfo repInfo = session.getRepositoryInfo();
	System.out.println("Repository name: " + repInfo.getName());

	md.browseFolder(session.getRootFolder());

	final String filename = "newFile.txt";
	final CmisObject folder = session.getObjectByPath(AlfrescoServiceMediator.FOLDER_PATH);

	final byte[] content = "This is a new File and it got updated now!!!!!".getBytes();

	// CREATE A DOCUMENT
	md.createCmisDocument(folder, filename, content);

	// UPDATE A DOCUMENT
	md.updateCmisDocument(session, filename, content);

	// DELETE DOCUMENT
	final CmisObject delfile = session.getObjectByPath(AlfrescoServiceMediator.FOLDER_PATH + "/jerrysDocument3.txt");
	md.deleteCmisDocument(delfile);

	// READ FILE
	System.out.println(md.readCmisFile(session, filename));

	// QUERY
	final ItemIterable<QueryResult> results = session.query("SELECT * FROM cmis:document", false);

	for (final QueryResult hit : results) {
		for (final PropertyData<?> property : hit.getProperties()) {
			final String queryName = property.getQueryName();
			final Object value = property.getFirstValue();
			System.out.println(queryName + ": " + value);
		}
	}
}

private void createCmisDocument(final CmisObject cmisFolder, final String filename, final byte[] content) {
	final Folder parent = (Folder) cmisFolder;

	// properties (minimal set: name and object type id)
	final Map<String, Object> properties = new HashMap<String, Object>();
	properties.put(PropertyIds.OBJECT_TYPE_ID, "ui:claimdocs");
	properties.put(PropertyIds.NAME, filename);
	properties.put("ui:claimDocumentType", "ClaimsPartialState");
	properties.put("ui:applicationName", "ClaimWebApp");

	// content
	final InputStream stream = new ByteArrayInputStream(content);
	final ContentStream contentStream = new ContentStreamImpl(filename, BigInteger.valueOf(content.length), "text/plain", stream);

	// create a major version
	final Document newDoc = parent.createDocument(properties, contentStream, VersioningState.MAJOR);
}

private void updateCmisDocument(final Session session, final String filename, final byte[] content) {

	final CmisObject cmisFile = session.getObjectByPath(AlfrescoServiceMediator.FOLDER_PATH + "/" + filename);
	Document docFile = (Document) cmisFile;
	docFile = (Document) session.getObject(docFile.checkOut());
	final InputStream filestream = new ByteArrayInputStream(content);
	final ContentStream filecontentStream = new ContentStreamImpl(filename,
			BigInteger.valueOf(content.length), "text/plain", filestream);
	docFile.checkIn(false, null, filecontentStream, "just a minor change");
}

private void deleteCmisDocument(final CmisObject cmisFile) {

	final Document delFile = (Document) cmisFile;
	delFile.delete(true);
}

private Folder createFolder(final Folder target, final String newFolderName) {
	final Map<String, String> props = new HashMap<String, String>();
	props.put(PropertyIds.OBJECT_TYPE_ID, "cmis:folder");
	props.put(PropertyIds.NAME, newFolderName);
	final Folder newFolder = target.createFolder(props);
	return newFolder;
}

private String readCmisFile(final Session session, final String filename) {

	final CmisObject cmisFile = session.getObjectByPath(AlfrescoServiceMediator.FOLDER_PATH + "/" + filename);
	final Document document = (Document) cmisFile;
	final InputStream stream = document.getContentStream().getStream();
	return this.getStringFromInputStream(stream);
}

// convert InputStream to String
private String getStringFromInputStream(final InputStream is) {

	BufferedReader br = null;
	final StringBuilder sb = new StringBuilder();
	String line;
	try {
		br = new BufferedReader(new InputStreamReader(is));
		while ((line = br.readLine()) != null) {
			sb.append(line);
		}
	} catch (final IOException e) {
		e.printStackTrace();
	} finally {
		if (br != null) {
			try {
				br.close();
			} catch (final IOException e) {
				e.printStackTrace();
			}
		}
	}
	return sb.toString();
}

private void browseFolder(final Folder rootFolder) {

	final ItemIterable<CmisObject> children = rootFolder.getChildren();
	for (final CmisObject object : children) {
		System.out.println("---------------------------------");
		System.out.println(" Id: " + object.getId());
		System.out.println(" Name: " + object.getName());
		System.out.println(" Base Type: " + object.getBaseTypeId());
		System.out.println(" Property 'bla': " + object.getPropertyValue("bla"));

		final ObjectType type = object.getType();
		System.out.println(" Type Id: " + type.getId());
		System.out.println(" Type Name: " + type.getDisplayName());
		System.out.println(" Type Query Name: " + type.getQueryName());

		final AllowableActions actions = object.getAllowableActions();
		System.out.println(" canGetProperties: " + actions.getAllowableActions().contains(Action.CAN_GET_PROPERTIES));
		System.out.println(" canDeleteObject: " + actions.getAllowableActions().contains(Action.CAN_DELETE_OBJECT));
	}
}

Cheap & Cost Efficient Home Server

Heard about Pogoplug? Pogoplug is a multimedia sharing device that lets you connect any external hard drive and then access and share your content over the internet. Basically, it is a plug computer powered by an ARM processor. The model that I purchased, the POGO-B01, has a 700 MHz dual-core processor and 128 MB of RAM. There are 4 USB ports on this device and one Ethernet port. Since it is a plug computer, it uses very little electricity. I got it from J&R for less than $20.

I attached an external hard disk and used it for a couple of weeks, having my own Pogoplug “storage cloud”. But the Pogoplug software had its own limitations when used on the internal network, so I decided to delete all the built-in features and install the Arch Linux OS along with the tools I need for my environment.

One of my main requirements was to have a common place to store all my files and share them easily between the computers in the house. I installed the Samba package and was able to set up network-attached storage (NAS). Next, I wanted these files, mainly music, pictures, and videos, to be accessible via my DLNA-enabled Blu-ray player, television, and Xbox. This was achieved by installing the MiniDLNA server. That was also a success, except that the client devices did not recognize or decode some of those video files. I am still searching for a package that works on this Pogoplug device and can do on-the-fly transcoding.

You can also install “Cherokee”, a very lightweight web server; “Tor”, an anonymizing proxy network (using the proxy is going to make your browsing experience very slow); “Flexget”, an RSS torrent downloader; and so on.

You can get all the information regarding the whole procedure, along with step-by-step instructions, from http://archlinuxarm.org.

If you have any cool ideas for improving this Arch Linux home server, please let me know. Thanks for reading.

MongoDB with SpringData

Here is a simple example to show how easy it is to write a CRUD Java application using SpringData for MongoDB.

First of all, install and start MongoDB. You need only three files to get things going: the domain object, CRUDInterface, and CRUDImplementation.
Make sure you have the required mongo and spring-data-mongodb jar files in the classpath.

The domain object

package jerry.jacob.mongo.test;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class Employee {
	@Id
	private String id;
	private String firstName;
	private String lastName;
	private int age;

	public Employee(String id, String firstName, String lastName,int age) {
		this.id = id;
		this.firstName = firstName;
		this.lastName = lastName;
		this.age = age;
	}

	public String getId() {
		return id;
	}

	public void setId(String id) {
		this.id = id;
	}

	public String getFirstName() {
		return firstName;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

	public int getAge() {
		return age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	@Override
	public String toString() {
		return "Employee [id=" + id + ", first name=" + firstName + ", last name=" + lastName + ", age=" + age + "]";
	}
}

CRUDInterface

package jerry.jacob.mongo.test;

import java.util.List;
import com.mongodb.WriteResult;

public interface CRUDInterface<O> {
	public List<O> getAllObjects();

	public void saveObject(O object);

	public O getObject(String id);

	public WriteResult updateObject(String id, String name);

	public void deleteObject(String id);

	public void createCollection();

	public void dropCollection();
}

CRUDImplementation

package jerry.jacob.mongo.test;

import java.util.List;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
import com.mongodb.WriteResult;

public class CRUDImplementation implements CRUDInterface<Employee> {
	MongoTemplate mongoTemplate;

	public void setMongoTemplate(MongoTemplate mongoTemplate) {
		this.mongoTemplate = mongoTemplate;
	}

	/**
	 * Get all Employees.
	 */
	public List<Employee> getAllObjects() {
		return mongoTemplate.findAll(Employee.class);
	}

	/**
	 * Saves an Employee.
	 */
	public void saveObject(Employee employee) {
		mongoTemplate.insert(employee);
	}

	/**
	 * Gets an Employee for a particular id.
	 */
	public Employee getObject(String id) {
		return mongoTemplate.findOne(new Query(Criteria.where("id").is(id)),
				Employee.class);
	}

	/**
	 * Updates an Employee name for a particular id.
	 */
	public WriteResult updateObject(String id, String lastName) {
		return mongoTemplate.updateFirst(
				new Query(Criteria.where("id").is(id)),
				Update.update("lastName", lastName), Employee.class);
	}

	/**
	 * Delete an Employee for a particular id.
	 */
	public void deleteObject(String id) {
		mongoTemplate
				.remove(new Query(Criteria.where("id").is(id)), Employee.class);
	}

	/**
	 * Create an Employee collection if the collection does not already exist.
	 */
	public void createCollection() {
		if (!mongoTemplate.collectionExists(Employee.class)) {
			mongoTemplate.createCollection(Employee.class);
		}
	}

	/**
	 * Drops the Employee collection if the collection already exists.
	 */
	public void dropCollection() {
		if (mongoTemplate.collectionExists(Employee.class)) {
			mongoTemplate.dropCollection(Employee.class);
		}
	}
}

Spring applicationContext.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

	<bean id="employeeRepository"
		class="jerry.jacob.mongo.test.CRUDImplementation">
		<property name="mongoTemplate" ref="mongoTemplate" />
	</bean>

	<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
		<constructor-arg name="mongo" ref="mongo" />
		<constructor-arg name="databaseName" value="nature" />
	</bean>

	<!-- Factory bean that creates the Mongo instance -->
	<bean id="mongo" class="org.springframework.data.mongodb.core.MongoFactoryBean">
		<property name="host" value="localhost" />
		<property name="port" value="27017" />
	</bean>

	<!-- Activate annotation configured components -->
	<context:annotation-config />

	<!-- Scan components for annotations within the configured package -->
	<context:component-scan base-package="jerry.jacob.mongo.test">
		<context:exclude-filter type="annotation"
			expression="org.springframework.context.annotation.Configuration" />
	</context:component-scan>
</beans>

Finally, test it

      public static void main(String[] args) {

		ConfigurableApplicationContext context = new ClassPathXmlApplicationContext(
				"classpath:/applicationContext.xml");

		CRUDImplementation repository = context.getBean(CRUDImplementation.class);

		// cleanup collection before insertion
		repository.dropCollection();

		// create collection
		repository.createCollection();

		repository.saveObject(new Employee("1", "Jerry", "Jacob", 30));
	}

LDAP (Active Directory) based authentication in JBoss Seam

Method 1:

Modify your components.xml and add the security:identity-manager and security:ldap-identity-store elements. The Authenticator.authenticate method WILL get called when your user logs in.


<security:rule-based-permission-resolver security-rules="#{securityRules}"/>

<security:identity authenticate-method="#{authenticator.authenticate}" remember-me="true"/>

<security:identity-manager identity-store="#{activeDirectory}"/>

<security:ldap-identity-store name="activeDirectory"
server-address="subdomain.active.directory.org"
server-port="389"
bind-DN="cn=Jacob Jerry,ou=Users,ou=Company,dc=ia,dc=com"
bind-credentials="password"
user-name-attribute="sAMAccountName"
first-name-attribute="givenName"
last-name-attribute="sn"
user-DN-prefix=""
user-DN-suffix="ia.com"
user-context-DN="OU=Users,ou=Company,dc=ia,dc=com"
role-context-DN="OU=Groups,ou=Company,dc=ia,dc=com"
user-role-attribute="memberOf"
role-name-attribute="sAMAccountName"
user-object-classes="person,user,organizationalPerson"
role-object-classes="group"/>

Inject the IdentityManager in Authenticator.java:

@In("#{identityManager}")
IdentityManager identMgr;

and authenticate the user:

identMgr.authenticate(credentials.getUsername() + "@", credentials.getPassword());

Why did I append ‘@’ after the username? The user-DN-suffix in my case is really @ia.com, but the leading ‘@’ was causing an error in components.xml, so I left it out of the configuration and append it here instead.

 

Method 2:

Modify your components.xml and add the jaas-config-name attribute to the security:identity element. Normally, if you add this attribute, the authenticate-method attribute has no effect, but adding a post-authenticate event listener action resolves this. The Authenticator.authenticate method WILL get called if you have the action listener specified here.


<security:identity remember-me="true" jaas-config-name="activeDirectory"/>

<event type="org.jboss.seam.security.postAuthenticate">
<action execute="#{authenticator.authenticate}"/>
</event>

In this method, you will have to define your application policy in the server’s login-config.xml. You may find your local JBoss login-config.xml in the deployment folder \jboss-eap\jboss-as\server\default\conf. Here is my entry in this file:

<application-policy name="activeDirectory">
    <authentication>
        <login-module code="org.jboss.security.auth.spi.LdapExtLoginModule" flag="required" >
            <module-option name="java.naming.provider.url">ldap://subdomain.active.directory.org:389</module-option>
            <module-option name="bindDN">cn=Jacob\, Jerry,OU=Users,ou=Company,dc=ia,dc=com</module-option>
            <module-option name="bindCredential">password</module-option>
            <module-option name="baseCtxDN">OU=Users,ou=Company,dc=ia,dc=com</module-option>
            <module-option name="baseFilter">(sAMAccountName={0})</module-option>
            <module-option name="rolesCtxDN">OU=Groups,ou=Company,dc=ia,dc=com</module-option>
            <module-option name="roleFilter">(sAMAccountName={0})</module-option>
            <module-option name="roleAttributeID">memberOf</module-option>
            <module-option name="roleAttributeIsDN">true</module-option>
            <module-option name="roleNameAttributeID">cn</module-option>
            <module-option name="searchScope">ONELEVEL_SCOPE</module-option>
            <module-option name="allowEmptyPasswords">false</module-option>
        </login-module>
    </authentication>
</application-policy>