
Moving VuFind to Zend Framework 2: Part 3 — Theme Inheritance

One of VuFind’s most important features is its theme inheritance system, which allows users to customize the interface by creating sub-themes that only override the templates that need to be changed. This helps isolate user changes from the core code and simplifies upgrades.

The Zend Framework 1 Solution

Since theme inheritance is such a core feature of VuFind, it was the first challenge I tackled when adapting the code to Zend Framework. Fortunately, the list8d project had already solved the problem for me and documented it in a very helpful blog post, so I was able to implement the feature quickly. Although VuFind’s implementation adds some features and changes a few details, it hasn’t strayed too far from the original list8d code.

Differences from VuFind 1.x

The biggest difference between VuFind 1.x themes and the list8d solution is that in VuFind 1.x, you had to create a comma-separated list of themes in the configuration file to specify how inheritance worked. In VuFind 2, with the list8d-inspired system, inheritance is controlled by a “theme.ini” file within each theme which tells VuFind whether or not the theme has a parent. The VuFind 2 approach is preferable for two reasons: it makes the config file more concise and easier to understand, and it prevents users from creating illegal inheritance chains by entering invalid comma-separated sequences.
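To make that concrete, a child theme's theme.ini might look something like the following sketch (the "extends" key reflects the inheritance mechanism described above; the theme names are invented for illustration):

```ini
; themes/myuniversity/theme.ini -- a hypothetical local sub-theme
[Configuration]
; Inherit everything not overridden here from the parent theme;
; a top-level theme would set this to false instead.
extends = blueprint
```

Because each theme names only its own parent, the full inheritance chain is assembled by VuFind itself, and there is no comma-separated list for a user to get wrong.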

Moving to Zend Framework 2

Now that I am moving from Zend Framework 1 to Zend Framework 2, I have again started by tackling the theme problem. Fortunately, the list8d solution still works, though it requires a few significant adaptations. The remainder of the article will highlight key changes. All of my code is available through Git on VuFind’s SourceForge project; feel free to borrow anything that you find useful.

Change 1: Exposing Public Resources

The list8d article talks about creating a link under the public webroot to expose theme resources. I instead opted to handle this through Apache configuration:

AliasMatch ^/vufind/themes/([0-9a-zA-Z-_]*)/css/(.*)$
    /usr/local/vufind/themes/vufind/$1/css/$2
AliasMatch ^/vufind/themes/([0-9a-zA-Z-_]*)/images/(.*)$
    /usr/local/vufind/themes/vufind/$1/images/$2
AliasMatch ^/vufind/themes/([0-9a-zA-Z-_]*)/js/(.*)$
    /usr/local/vufind/themes/vufind/$1/js/$2

<Directory /usr/local/vufind/themes/vufind/>
  Order allow,deny
  allow from all
  AllowOverride All
</Directory>

(Some lines wrapped and indented for clearer display)

Through the magic of regular expressions, this ensures that only JavaScript, CSS, and images are exposed to the public, while other theme elements (like PHP templates) remain private. So, for example, the styles.css stylesheet in /usr/local/vufind/themes/vufind/blueprint/css becomes visible at the URL http://[your-server]/vufind/themes/blueprint/css/styles.css.

Note that the actual file path to the themes may be subject to change. I’m still debating whether themes belong inside or outside the VuFind-specific module (right now I chose outside, since this allows multiple modules to share the same Apache mappings) and whether or not the themes folder needs to be broken into subdirectories for disambiguation (that accounts for the current redundant “vufind” in the path, but I might decide to eliminate it for simplicity at risk of clashing with other modules). Feedback is welcome.

Also note that VuFind comes with an install script that automatically customizes the Apache configuration to adjust VuFind’s base URL and installed path, so you don’t actually have to edit all of this stuff by hand if you use non-default settings.

Change 2: Initializing Themes

The list8d solution proposes setting up themes by implementing a base controller that all other controllers inherit from. This controller’s init() method is then responsible for reading in the theme.ini file and setting everything up (which mostly consists of manipulating the framework’s search paths so it finds the appropriate templates and helpers in the appropriate places).

When I adapted this for my initial ZF1-based VuFind prototype, I tried to make it more stand-alone by creating a Zend Controller Plug-in to do the work rather than embedding it in a base class… but this didn’t really change anything significantly; it just moved the logic from one somewhat obscure place to a different somewhat obscure place.

Fortunately, Zend Framework 2 has a more comprehensible event-driven architecture for plugging things into the workflow. Rather than using base classes or weird plug-ins, you can hook events from a module’s bootstrap method. This allows much better separation of concerns: I was able to create a VuFindThemeInitializer class which does the actual theme startup, and then I attach different methods of the initializer to appropriate events as part of VuFind’s bootstrapping process.
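Setting ZF2’s specific API aside, the general pattern is easy to illustrate: attach individual initializer methods to named events during bootstrap, rather than burying the logic in a base class. The following self-contained sketch uses a toy event manager (in ZF2 this role is played by the framework’s own EventManager, and the class and event names here are invented):

```php
<?php
// Minimal stand-in for an event manager, to illustrate the pattern only;
// in ZF2 this role is played by the framework's EventManager.
class EventManager
{
    private $listeners = [];

    public function attach($event, callable $listener)
    {
        $this->listeners[$event][] = $listener;
    }

    public function trigger($event, $param = null)
    {
        foreach ($this->listeners[$event] ?? [] as $listener) {
            $listener($param);
        }
    }
}

// Hypothetical theme initializer: each concern is a separate method that
// can be hooked to whichever event is appropriate.
class ThemeInitializer
{
    public $log = [];

    public function configureSearchPaths($e) { $this->log[] = 'paths'; }
    public function configureTemplateInjection($e) { $this->log[] = 'injector'; }
}

// During bootstrap, attach the methods to events instead of hiding the
// logic inside a base controller or controller plug-in:
$events = new EventManager();
$init = new ThemeInitializer();
$events->attach('bootstrap', [$init, 'configureSearchPaths']);
$events->attach('dispatch', [$init, 'configureTemplateInjection']);

$events->trigger('bootstrap');
$events->trigger('dispatch');
```

The payoff is separation of concerns: the initializer knows nothing about controllers, and the bootstrap code only decides *when* each piece of setup runs.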

Change 3: Custom Template Injector

One of the features of Zend Framework 2 is that, if no template is explicitly specified, the framework injects a default template name into the view model. This default template name is the namespace of the module containing the controller, then the name of the controller, then the name of the action. That interferes with the theme system — we don’t want the namespace on the template name. I created a custom template injector that eliminates the namespace and (due to my own personal preference) also makes sure that URLs are case-insensitive by stripping out dashes caused by camelCase action/controller names. This is set up as part of the theme initialization routine (see the configureTemplateInjection method).
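The normalization itself is simple string manipulation. A hypothetical version of the core logic might look like this (the function name and string-based interface are invented for illustration; the real injector works with the framework’s view model rather than plain strings):

```php
<?php
// Given a ZF2-style default template name such as
// "my-module/my-search-controller/json-results", produce a
// namespace-free, dash-free equivalent.
function normalizeTemplateName($default)
{
    $parts = explode('/', $default);

    // Drop the leading module namespace segment, since the theme system
    // resolves templates across modules:
    array_shift($parts);

    // Strip the dashes that ZF2 inserts for camelCase controller/action
    // names, so that differently-cased URLs map to the same template:
    return str_replace('-', '', implode('/', $parts));
}

echo normalizeTemplateName('my-module/my-search-controller/json-results');
// mysearchcontroller/jsonresults
```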

Change 4: View Helper Loading

The original list8d theme solution simply injects helper paths into the helper broker. The framework then searches up the theme inheritance tree until it finds a matching helper. This is easy (no configuration necessary) but it is also slow (every helper initialization requires a search of the file system). Because ZF2 deals with helpers a little differently, I decided to make helper configuration more explicit. Each theme.ini file now includes a helper_namespace setting which specifies where helpers live, and a helpers_to_register[] array which lists all of the helpers that need to be made available. This explicit configuration is obviously less convenient than “magic” auto-loading, but since adding helpers is a relatively infrequent task, the performance benefits seem to justify the change.
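Under this scheme, the relevant section of a theme.ini might read as follows (the two key names come from the description above; the namespace and helper names are illustrative):

```ini
; Explicit helper configuration for a hypothetical theme:
helper_namespace = "VuFind\Theme\Blueprint\Helper"
helpers_to_register[] = "HeadLink"
helpers_to_register[] = "HeadScript"
helpers_to_register[] = "ImageLink"
```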

I initially set things up so that themes had their own unique namespaces and the theme initializer found the active Zend Autoloader and informed it how to find the helpers in that namespace under the themes directory. I eventually decided this was unnecessarily overcomplicated and scrapped it — now all of VuFind’s view helpers live in the namespace VuFind/Theme/[theme_name]/Helper (which means their code is inside the VuFind module rather than under the theme directories) and take advantage of the default autoloader settings. I’m reasonably happy with this solution, but it’s not hard to change if a better layout is determined in the future.

Change 5: The Tools Class

As I already mentioned, I set up a VuFindThemeInitializer class to do the work of setting up themes. The initializer in turn needs to know a few things: for example, the base path of the application and the place in the session to persist theme settings (to reduce redundant file accesses). Rather than hard-coding these details into the Initializer, I created a VuFindThemeTools class which provides these details to the Initializer’s constructor. This provides an opportunity for using dependency injection to change default behavior and implement unit tests. It also reduces redundancy, since other classes that need access to the same resources (i.e. theme-aware view helpers) can pull data from the Tools class rather than duplicating the dependency initialization.

Ideally, I should probably define a ToolsInterface to guide implementation of alternate tools classes. It may also make sense to split this class into separate pieces to handle different sets of functionality. For now, I’m just using a single catch-all Tools class to keep things simple; it’s always possible to refactor when all the use cases become more clear.
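The constructor-injection arrangement is straightforward. Here is a heavily simplified sketch of the Tools/Initializer relationship described above (method names are invented, and the real classes carry much more state, such as the session container):

```php
<?php
// Simplified stand-in for the Tools class: a single home for shared
// environment details like the application base path.
class ThemeTools
{
    private $basePath;

    public function __construct($basePath)
    {
        $this->basePath = $basePath;
    }

    public function getBaseDir()
    {
        return $this->basePath . '/themes';
    }
}

class ThemeInitializer
{
    private $tools;

    // The dependency arrives through the constructor, so a unit test can
    // pass in a ThemeTools pointing at a fixture directory instead of
    // the real application path.
    public function __construct(ThemeTools $tools)
    {
        $this->tools = $tools;
    }

    public function getThemeIniPath($theme)
    {
        return $this->tools->getBaseDir() . '/' . $theme . '/theme.ini';
    }
}

$init = new ThemeInitializer(new ThemeTools('/usr/local/vufind'));
echo $init->getThemeIniPath('blueprint');
// /usr/local/vufind/themes/blueprint/theme.ini
```

Theme-aware view helpers can share the same ThemeTools instance rather than recomputing paths themselves.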

Change 6: The ResourceContainer Class

In the list8d implementation of themes, you can specify CSS and JS files in your theme.ini file to ensure that they are loaded on every page within a given theme. The VuFind implementation extends this to support favicons as well. In the Zend Framework 1 version of all of this code, the theme initialization not only loads these settings from the configuration file but also parses it and configures the framework appropriately.

This is potentially inefficient — some controller actions won’t ever render a page; others will forward from action to action, causing redundant work to be performed.

When I reimplemented this in ZF2, I added an extra layer. The VuFindThemeInitializer loads the settings from the configuration file into a VuFindThemeResourceContainer object provided by the VuFindThemeTools object. The settings are not actually processed until it is actually time to render a page. At that point, a call to a new HeadThemeResources view helper causes the files to get loaded. As with the changes to helper loading, this extra explicit step is slightly inconvenient, but the performance benefits should outweigh that disadvantage — creating a new layout is an uncommon activity, so adding one step to that process shouldn’t inconvenience anyone too severely as long as the process is well documented.
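The two-phase arrangement can be sketched like this (class and method names are illustrative, not VuFind’s actual API):

```php
<?php
// Phase-separated resource handling: the initializer merely records
// settings; nothing touches the view until render time.
class ResourceContainer
{
    private $css = [];
    private $js = [];

    public function addCss(array $files) { $this->css = array_merge($this->css, $files); }
    public function addJs(array $files)  { $this->js  = array_merge($this->js, $files); }
    public function getCss() { return $this->css; }
    public function getJs()  { return $this->js; }
}

// Phase 1 (initialization): cheap -- just store the settings from each
// theme.ini in the container, parent themes first.
$container = new ResourceContainer();
$container->addCss(['blueprint/screen.css']);   // parent theme
$container->addCss(['styles.css']);             // child theme
$container->addJs(['jquery.js', 'common.js']);

// Phase 2 (render time): a view helper akin to HeadThemeResources walks
// the container and appends everything to the real HeadLink/HeadScript
// helpers. Here we just collect the names to show the ordering.
$rendered = array_merge($container->getCss(), $container->getJs());
echo implode(',', $rendered);
// blueprint/screen.css,styles.css,jquery.js,common.js
```

Actions that never render a page, or that forward to other actions, pay only the cost of a few array appends.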

Change 7: View Helpers

The list8d solution provides a custom HeadLink helper that searches the themes in inheritance order to find the best matching CSS file. VuFind’s solution adds a similar custom HeadScript helper and also adds an ImageLink helper which finds the most appropriate image file. Since all of these helpers use similar logic to locate files, they rely on a shared method in VuFindThemeTools to do the bulk of their work.
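The shared lookup logic amounts to walking the theme hierarchy from most specific to least specific and returning the first theme that actually contains the requested file. A hypothetical version (the function signature is invented):

```php
<?php
// Walk the theme hierarchy (most specific theme first) and return the
// name of the first theme containing the requested relative path, or
// false if no theme in the chain has it.
function findContainingTheme(array $themeHierarchy, $relativePath, $baseDir)
{
    foreach ($themeHierarchy as $theme) {
        if (file_exists($baseDir . '/' . $theme . '/' . $relativePath)) {
            return $theme;
        }
    }
    return false;   // not found anywhere in the chain
}
```

A HeadLink-style helper can then build the public URL from the returned theme name, so a child theme’s stylesheet automatically shadows its parent’s copy while everything else falls through to the parent.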

Future Work

This is a very young solution, and it’s entirely possible that I’ll run into problems that will require some changes and refactoring. I’m also aware that my class names and file locations may not be ideal, and I’m open to feedback on possible improvements there. Finally, it may eventually make sense to build a stand-alone ZF2 ThemeInheritance module and further separate VuFind-specific behavior from the generic theme-related tools. I don’t think this would actually be a huge amount of work, though for now my first priority is to finish the VuFind 2.0beta prototype. Once that is done and the code has been more thoroughly exercised, it may be worth revisiting whether it can be further modularized for better sharing.


I’m grateful to the list8d team for sharing their work, and I hope that my additions and changes will be of use to others. This has been another long, rambling post, and I’m sure there are some details I failed to touch on. Let me know if you have questions about anything.


Moving VuFind to Zend Framework 2: Part 2 — First Impressions

I’ve now been working with Zend Framework 2 for a few days, and I’m starting to get a feel for how it works. It seemed worth sharing a few initial thoughts.


Namespaces

The use of PHP namespaces is a big change, mostly for the better. VuFind 1.x suffered from a total lack of namespacing. This resulted in haphazard include/require statements at the tops of most files and occasional naming conflicts between VuFind classes and external libraries. When I began adapting the code to Zend Framework 1, I used Zend’s autoloader to eliminate all of the includes and requires and prefixed all classes with “VF_” to achieve a sort of namespacing. Using namespaces in ZF2 requires namespace declarations at the tops of all the files, which brings back some of the noise of the include/require statements. However, unlike includes/requires, namespace declarations do not cause a significant performance hit, and they are also more informative to the reader about the dependency structure of the code. Because namespacing allows you to import classes under aliased names, it also means the code that follows the namespace declarations can be made much more readable than the ZF1-style prefix-heavy code. On balance, I think ZF2 has it right, though some small part of me wishes that some of the repetition of the namespace declarations could be avoided somehow.

Dependency Injection

Dependency Injection is one of the leading buzz-phrases of Zend Framework 2 (and the PHP development world in general). It’s a simple enough concept: instead of having your classes forge their own connections to their dependencies (e.g. using global variables, or calling factories/constructors directly), you instead “inject” those dependencies through constructor parameters and/or setters. This makes your code more flexible, since you can inject different versions of the dependencies in different situations to change behavior. It also makes testing easier, since you can inject dummy objects to test particular classes without having to worry about lots of external details.

Obviously, the problem with dependency injection is that it makes external code responsible for feeding dependencies to classes, which can lead to lengthy and repetitive code. This can become a burden. That’s where Dependency Injection Containers come in — classes which automatically build sets of related classes with all the dependencies properly injected. And that’s where things can start to get complicated and confusing. I don’t claim to have dug into this topic too deeply yet, though I’m sure I’ll gain familiarity as I work with ZF2, since it uses a Dependency Injection Container for much of its internal functionality.

As far as VuFind is concerned, I don’t plan to go crazy with Dependency Injection. More specifically, I don’t plan on using a Dependency Injection Container unless there is a very strong justification for doing so. Unfortunately, PHP as a language is not ideally suited for DI, so there are trade-offs in terms of complexity and/or performance when trying to automate DI through a container. However, I do plan on investigating places where the general idea of DI (minus the automation of a container) can be used to make local customizations and testing easier. One of the beauties of DI is that you can have the best of both worlds: you can build classes that allow injection of dependencies but also auto-generate those dependencies if none are provided. This is probably the direction the code will take, at least for the first iteration.
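That “best of both worlds” approach is easy to express in plain PHP. A sketch (the classes here are made-up examples, not actual VuFind code):

```php
<?php
// A default dependency that the consuming class knows how to build:
class SolrConnector
{
    public function getUrl() { return 'http://localhost:8080/solr'; }
}

class SearchService
{
    private $connector;

    // Callers (and unit tests) may inject any compatible connector; if
    // nothing is provided, a sensible default is built automatically --
    // no Dependency Injection Container required.
    public function __construct($connector = null)
    {
        $this->connector = $connector ?: new SolrConnector();
    }

    public function describe()
    {
        return 'searching ' . $this->connector->getUrl();
    }
}

echo (new SearchService())->describe();
// searching http://localhost:8080/solr
```

Production code can simply call `new SearchService()`, while a test can pass in a stub connector to exercise SearchService in isolation.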

Configuration vs. Magic

One of the balancing acts in any framework involves the trade-offs between configuration and “magic.” How much functionality just works by convention, and how much does the programmer have to explicitly set up? Too much magic and you end up with code that’s very difficult to debug, since it’s hard to trace where and why things are happening. Too much configuration and simple tasks become a burden to achieve.

ZF2 seems to have a heavier emphasis on explicit configuration than ZF1 did — a lot of this has to do with the fact that some of ZF1’s “magic” led to performance-related problems; for example, auto-loading from a lengthy search path requires a lot of extra file accesses, which take significant time. I admit that I’ve been spoiled by some of ZF1’s magic features. Obviously, ZF2 doesn’t actually prevent you from using some of the same techniques as ZF1, but I would like to use best practices, and I’m going to try to recalibrate the code to use more configuration as I move forward. That being said, one of my highest priorities is making local customization simple, and if that requires a bit of magic, I’m willing to sacrifice a little performance.

The New View Layer

One of the big changes in ZF2 is a major refactoring of the view layer (and MVC in general). ZF1’s view was a bit of a monolithic beast. ZF2 has broken this into smaller chunks that interact with each other. There are now a lot of moving parts to keep track of, but each part is simpler to understand and individually modify than the previous whole. In any case, I don’t expect that the internals of the view layer will have too much impact on my design for VuFind, and the good news is that the actual templates themselves haven’t changed too much. The biggest non-backward-compatible change I’ve run into is the fact that the URL view helper has changed its parameter list, and this is actually a very good thing: the absolute worst feature of ZF1 views was the fact that generating URLs required an unwieldy call to the URL view helper. In ZF2, these calls are now much more concise and understandable — provided that you create a comprehensive router configuration, which I plan to do.


Obviously, my relationship with ZF2 is still very young, but so far my impression is largely positive, and I’m enjoying the process of moving forward. The biggest downside I can see so far is a general increase in verbosity (through namespacing and explicit configuration) which may slightly steepen the learning curve for working with VuFind. However, I think the benefits these costs buy (more readable code, better performance, more reliable extensibility) make them worthwhile, and I hope to compensate for the added complexity through tutorials in the wiki once the design is a bit more stable.

I hope these ramblings have been informative. Comments are welcome if you feel I have misrepresented anything. If you have any questions, feel free to either post them here or take them to the vufind-tech list.


Moving VuFind to Zend Framework 2: Part 1 – Orientation

As you may already know, I’ve been hard at work on updating VuFind‘s architecture in preparation for the forthcoming 2.0 release. The Why VuFind 2.0? page in the wiki describes some of the reasons for this change. So far, I have been extremely happy with the improvements I have been able to make. I feel that the code is now more concise and readable while offering broader and more consistent features. An alpha release is on schedule for early July.

Of course, life with software is never simple. A key decision of the redesign was to use Zend Framework as VuFind’s foundation, on the theory that a widely-used, well-understood, well-documented framework would make the codebase more accessible and standards-driven. I still think this is a wise choice. However, the timing is unfortunate, because the Zend Framework community is just about to release version 2.0 of their software, which breaks backward compatibility in the name of progress. VuFind 2.0 has so far been built on Zend Framework 1.x. Since I don’t want the official finished release of VuFind 2.0 to be built on deprecated architecture, I have some work to do.

I will still release the ZF1-based 2.0alpha in July. It provides a working demonstration of how VuFind 2.0 will look from the outside. However, I hope to follow that as quickly as I can with a ZF2-based 2.0beta so that the final shape of VuFind’s new architecture will also be available for testing.

The problem with moving from ZF1 to ZF2 is that the new version is still under active development (although it’s getting pretty close to stability by now) and the documentation is still fairly incomplete. There is no simple “translate this to that” guide for moving from ZF1. There are also some new techniques (namespaces and dependency injection, with a touch of event-driven programming) that need to be digested in order to understand the new framework. In all, it’s a bit intimidating.

Thus this series of blog posts: as I work through problems, I’ll try to write about them here in case anyone else is struggling with the same things. For now I’m just providing a little background; things should get more interesting once I get deeper into the code. If you want to learn more, check out the resource links on the ZF1 vs. ZF2 page in the VuFind wiki. The Zend webinars in particular are a really helpful way to start learning about what’s going on in the new framework, whether or not you are familiar with ZF1 — I recommend watching a few if you want to be prepared for what’s coming.

Now I’d better get back to digging into code — wish me luck!


Separating Local Code Customizations in PHP


For the past few months, I have been working on a prototype of VuFind 2.0, a reimplementation of the software based on the Zend Framework. I’m very proud of the 1.x series of VuFind releases, and I think they stand pretty well on their own, but the software has been around long enough to begin outgrowing its initial architecture. This reimplementation is designed to clean up some long-standing messes and make the package even more developer-friendly.

One of the big issues for any open source project is figuring out how to deal with local code customizations. A major benefit of open source is that anyone can change it… but changes can come back to bite you when it comes time to upgrade. There are two main strategies that can help alleviate this problem: use a version control system (e.g. Subversion or Git) and try to isolate your changes to separate files rather than changing core files whenever possible. Isolating changes is useful since, even if an upgrade breaks something, it helps you remember exactly what you customized. Version control is of obvious value — if you do have to resort to changing core modules, it helps you keep track of what you did and merge it with future developments.

VuFind already has some powerful mechanisms for isolating local changes from the core — theme inheritance makes user interface customization cleaner, a wealth of configuration file options reduces the need to change core code in many cases, and plug-in mechanisms like record drivers and recommendation modules offer hooks for inserting locally-built code. However, if you need to change some aspect of a core library class, you may still need to resort to editing core code.

The Goal

I have seen packages where you can override classes by copying a core PHP module, pasting it into a different directory, and making your changes to the copy. By taking advantage of a PHP search path that checks the “local” area prior to the “core” area, the package will then load your copy of the file in preference to the core version, allowing you to override the class. While this solution is a step in the right direction as far as avoiding the need to edit core files, it has significant drawbacks — you have to copy an entire class in order to change any one element of it, and when upgrade time comes around, chances are that you’re still going to have to do a significant amount of work to reconcile your locally-copied files with the new core. In fact, I would argue that this solution is actually worse than simply editing the core, since it makes it harder to effectively use version control software to merge changes.

As I see it, a better solution would be to find a way to extend core classes without completely overriding them — i.e. to create a child class that adjusts only the method or methods you need to change, without replacing the entire class. This would encapsulate your local changes in the most concise form possible, and while you still might have to do some reconciliation at upgrade time, good use of object-oriented principles combined with a stable application design could keep problems to a manageable minimum.

The biggest challenge to implementing this is that you run into naming problems. For obvious reasons, PHP doesn’t let you have two classes with the same name. If your core code refers to a class called VF_Search_Object and you want to change the behavior of the getResults() method without editing any other code, how can you do that? Fortunately, there is a way — it’s just a bit tricky.

The Solution

The answer to this problem relies on two key characteristics of PHP: class autoloading and dynamic code generation. With autoloading, PHP has the ability to call a function whenever you attempt to instantiate a class which does not exist. With dynamic code generation, PHP can actually create classes on the fly based on the contents of variables. The trick is to build an autoloader that detects whether or not local customizations have been made and to dynamically generate a new class that derives from either the locally customized version or the original core version as needed.

Still sounds complicated? Fortunately, Zend Framework makes it easier with its powerful autoloader module. The Zend Autoloader gives you a great deal of control over how classes get autoloaded. It can be configured to look at different class name prefixes and load those classes from different directories… or even call different custom autoloader functions. To solve our problem, we need to set up three different class name prefixes:

Core – Any class that begins with “Core_” is core code. Users would never want to directly edit any of these files.

Local – Any class that begins with “Local_” is localized code. Normally these would only exist when a user wanted to customize some piece of functionality, and they would extend a Core_ class with the same name suffix (i.e. the “Local_Example” class would extend the “Core_Example” class).

Extensible – Whenever any code instantiates a class, it will use the “Extensible” prefix instead of “Core” or “Local” — this is how the magic happens, since there should be no classes in PHP files on disk whose name begins with “Extensible_” — instead, the classes will be created dynamically as needed.

Here’s the code that sets this all up using the Zend Autoloader:

$autoloader = Zend_Loader_Autoloader::getInstance();
$autoloader->registerNamespace(array('Core_', 'Local_'));
$autoloader->pushAutoloader('extensibleAutoloader', 'Extensible_');

Very simple — “Core_” and “Local_” are registered as standard namespaces within the autoloader, which means that they will be searched for on disk. “Extensible_” is registered as a special namespace that needs to trigger a custom autoloader called extensibleAutoloader. Here’s the code for that function:

/**
 * Autoloader that allows optional local classes to extend required core classes
 * seamlessly with the help of a particular namespace.
 *
 * @param string $class  Name of class to load
 * @param string $prefix Class namespace prefix
 *
 * @return void
 */
function extensibleAutoloader($class, $prefix = 'Extensible_')
{
    // Strip the class prefix off:
    $suffix = substr($class, strlen($prefix));

    // Check if a locally modified class exists; if that's not found, try to load
    // the core version.  If nothing is found, throw an exception.
    if (@class_exists('Local_' . $suffix)) {
        $base = 'Local_' . $suffix;
    } else if (@class_exists('Core_' . $suffix)) {
        $base = 'Core_' . $suffix;
    } else {
        throw new Exception('Cannot load class: ' . $class);
    }

    // Safety check -- make sure no crazy code has been injected; these have to be
    // simple class names:
    $base = preg_replace('/[^A-Za-z0-9_]/', '', $base);
    $class = preg_replace('/[^A-Za-z0-9_]/', '', $class);

    // Dynamically generate the requested class:
    eval("class $class extends $base { }");
}

As you can see, it’s actually pretty simple — extensibleAutoloader() takes advantage of the regular autoloader in combination with “class_exists” to check whether or not localized versions are available. This tells it which base class needs to be extended in order to generate the requested Extensible_ class… then it uses the eval() function to dynamically create the class.

So imagine you run this code:

$z = new Extensible_Sample();

If you haven’t created a Core_Sample or Local_Sample class, you’ll get an exception. But suppose you put this Core_Sample class into your library:

class Core_Sample
{
    public function __construct()
    {
        echo 'I am a rock.';
    }
}

Now instantiating the Extensible_Sample object will display “I am a rock.” on screen — the autoloader will find and load Core_Sample but name it Extensible_Sample.

Let’s take it a step further and create a Local_Sample that extends Core_Sample:

class Local_Sample extends Core_Sample
{
    public function __construct()
    {
        echo 'Some people may think that ';
        parent::__construct();
    }
}

Now the Extensible_Sample object will display “Some people may think that I am a rock.” Magic!


I’m very happy to see that it is actually possible to achieve this effect — it’s something that I’ve been thinking about for a long time, and I’m happy I was able to make it work. That being said, I’m not sure if it’s worth the effort. I see three major drawbacks:

– This is a powerful mechanism for extending code IF YOU UNDERSTAND IT. But it increases the learning curve for getting into the codebase, since at a glance it will be very confusing to see all these references to Extensible_* classes that don’t actually exist on disk.
– All of the autoloading involved in the solution adds some overhead to the code. I haven’t done testing to see how significant the overhead actually is… but without some kind of caching or PHP acceleration, I have a feeling it might turn out to be somewhat expensive.
– The eval() function is one of the most dangerous features in PHP, since it provides an opportunity for attackers to execute arbitrary code. I believe that the way I’m using it here is safe (especially with the extra regex cleanup I’ve added), but it nonetheless makes me a little nervous.

I would love to hear what other people think of this — is the solution technically sound? Is the benefit worth the cost? At this point, I’m not necessarily committed to implementing this as part of VuFind 2.0 (and obviously the namespaces won’t be “Core_”, “Local_” and “Extensible_” if I eventually do). It could be done, though, and I think it’s worth considering. All feedback is welcome!


Interactive Map Building using the Raphaël JavaScript Library


With the dual challenges of a complex building and constant resource and personnel movement due to construction, Falvey Memorial Library at Villanova University required an easy-to-update method to both guide our patrons around the building and help them find the resources they need. To do this, the Library Technology Team developed an interactive map system built using the Raphaël JavaScript libraries that allows for fast updates and easy map construction, while still allowing the map to be dynamic and interactive. JavaScript and the Raphaël libraries were chosen over other technologies to maximize accessibility by our community.


Nowadays, when people talk about challenges facing the modern library, they mostly refer to issues and challenges in the digital world. Here at Falvey Library, though, one of our daily challenges is still in the brick-and-mortar realm – specifically, the actual brick and mortar that makes up our library building. Starting in the summer of 2011 (Monday, August 22 to be exact), Falvey Library began renovations to the building to improve the working space of the library, with one of the goals being to merge the disparate sections of the library building into one (mostly) unified space. This work will go far in reducing the challenge of the physical building which, prior to the construction, had been a patchwork of different additions and extensions, resulting in the library building operating as if it were two separate spaces.

During this process, many of the stacks and collections are being moved around to make way for the construction teams, with many of our collections crossing over from one of the previously mentioned operating spaces into the other. As well, many members of our staff, including our library Director, have moved to temporary offices, with future plans for various staff members to move into newly constructed spaces. A new learning commons is also planned, so whole departments within the library will be moving to take advantage of this new space (including our campus writing center and campus math center, amongst others).

As you can see from above, our library is currently in a state of flux, and with a complex “gestalt” building and constantly moving locations of both collections and people, we have a real challenge on our hands making sure patrons can find the people and resources they need in a timely and efficient manner.

So how can the tech department help?

The Challenge

The quick answer, of course, is that we need a map and directory that can keep up with the changes and quickly inform patrons where to find the person or resource they need. This system needs to be flexible enough to handle periodic changes to the locations of people and resources as well as changes to the physical building itself. Furthermore, it needs to be accessible to the widest possible audience. Finally, since we’re taking the time to do this, why not make the map interactive and code it to work directly with our catalog, pointing patrons directly to the shelf on which a book or resource currently resides?

From the above, it looks like we want more than a static map, and thus code is involved. But which method should we use to implement our new interactive map?

The first and most obvious choice would be Adobe Flash, and a quick spin around the Internet confirms its popularity: the majority of interactive maps are built using this technology. The reason for this is simple – Flash use is widespread in the web universe and Flash itself is relatively cheap to implement. On the other hand, the webverse is starting to turn on Flash as a dated technology on its way out the door. Though I believe the rumors of the demise of Flash to be a bit premature, there’s no denying that losing access to the map on Apple mobile devices would be a big loss to our patrons. As much as I don’t like my technology use being dictated by the whims of a corporate giant, excising Apple users from the interactive map is too big a loss in community share for me to be comfortable with this choice. Besides, I do share some of Steve Jobs’ criticism of Flash: its bulkiness, its incessant need for upgrades, and the general sluggishness of pages running Flash applications. So, what else do we have?

Always out to clone and usurp hot technologies, Microsoft has its own Flash competitor in Silverlight, our next contestant in the “Interactive Map Technology Showdown”. Having been a WPF programmer a while back (Silverlight started its life as WPF/Everywhere, so the technologies are very similar), I have to admit to being slightly partial to this choice. Personally, I find WPF/Silverlight coding the most fun of any language I’ve run into in a long time. Unfortunately, not only are the Flash issues above also prevalent in Silverlight, but Silverlight carries the added huge negative of not being nearly as widely adopted as Flash. Therefore, Silverlight isn’t a great choice for the map either, given our requirement of maximum accessibility.

One more technology of note in this challenge is HTML5. Though HTML5 looks highly promising, it technically doesn’t exist yet (as of this writing, the specification isn’t expected to be finalized until 2014).

So what’s the winner?

The Solution

The winner is JavaScript.
More specifically, the backbone of the graphics is powered by the Raphaël JavaScript Library, a free, open source (we like those at Falvey Library) vector graphics library. This library allows me to draw and display any shape using the SVG standard and to add event handlers to these shapes (e.g., an event that turns the shape red when the mouse moves over it). It also includes a simple library for adding animations to shapes (including motion and fading), as well as for image manipulation and display.

So, why JavaScript (and the Raphaël library) over the competitors? For one, JavaScript and the Raphaël library don’t require a plug-in, whereas Flash and Silverlight do. In terms of accessibility, this puts JavaScript at the top of the list (and means no annoying software updates or broken players). Furthermore, JavaScript is much more lightweight than Flash and Silverlight – for example, JavaScript loads much faster on page load than either Flash or Silverlight (both of which feel sluggish to me in general on page load). Finally, compared to Flash at least, I found that working with JavaScript and Raphaël made building animations much easier. Most of all, by using JavaScript, I am hoping to allow the widest range of users to access the interactive map, as JavaScript is the most universal and dynamic web language available today.

The Implementation

The interactive map is currently implemented on Falvey Library’s website here.

Like JavaScript itself, the whole interactive map system is very lightweight – it consists of only a few JavaScript files, a JSON file to hold the map data, and a few image files. All the highlighted areas you see (front desk, shelves, etc.) are shapes created via Raphaël but hidden from view until highlighted. The “floors” are Raphaël image objects piled on top of each other — to switch pages, I just change the order of the image objects in the Raphaël object stack while pushing the shapes associated with that page on top of the image, so that mousing over them will trigger their events (and since the other floors are piled under the image object moved to the top of the stack, you don’t get interference from those objects).

I used the free, open source SVG tool Inkscape to determine the graphics coordinates of my shapes. To do this, I simply loaded a floor map into Inkscape, drew each shape using the SVG freedraw tool, saved the image, and finally copied the coordinates of the shape (pulled from the save file) into a JSON file. This process is a bit labor-intensive (especially since I did every stack row), though it didn’t take me more than a day, since each successive shape gets easier once you get into the rhythm of the process. Once this is done, the JSON is updated to hold info on any person, department, or call number range held in each shape — this way, when the map info changes, all I have to do is change a setting in the JSON file and I can quickly update the information on the map.

The final step is to add a set of JavaScript functions to the page header that read the JSON file and build the page. All the extra JavaScript functionality (determining which range an entered call number is in, etc.) is also stored in the header.
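To make this concrete, here is a rough sketch of what the JSON map data and the call-number lookup function might look like. The data layout, field names, and simplified call-number comparison are my own illustration, not the exact format used on the Falvey site.

```javascript
// Hypothetical map data: each shape carries its SVG path plus optional
// metadata (here, an LC call number range). Field names are illustrative.
const mapData = {
  floor1: [
    { label: "Front Desk", path: "M10,10L50,10L50,40L10,40Z" },
    { label: "Stacks A", path: "M60,10L120,10L120,40L60,40Z",
      callStart: "PR1000", callEnd: "PR5999" },
    { label: "Stacks B", path: "M60,50L120,50L120,80L60,80Z",
      callStart: "PR6000", callEnd: "PZ9999" }
  ]
};

// Very rough call-number comparison: compare class letters, then the number.
// Assumes simple "letters + digits" call numbers; real LC parsing is messier.
function compareCall(a, b) {
  const parse = s => {
    const m = s.match(/^([A-Z]+)\s*(\d+)/i);
    return [m[1].toUpperCase(), parseInt(m[2], 10)];
  };
  const [la, na] = parse(a), [lb, nb] = parse(b);
  if (la !== lb) return la < lb ? -1 : 1;
  return na - nb;
}

// Find the shape whose range contains the entered call number.
function findShelf(floor, callNumber) {
  return mapData[floor].find(s =>
    s.callStart &&
    compareCall(callNumber, s.callStart) >= 0 &&
    compareCall(callNumber, s.callEnd) <= 0);
}
```

With data like this, updating the map when a collection moves really is just a matter of editing the JSON entry for the affected shape.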

The Way Forward

Right now a lot of my JavaScript code is tied into our CMS (Concrete5). I hope to refactor the code soon to break this dependency and allow easier adaptation by other institutions. There are also currently way too many hard-coded items in the code – these need to be moved out into a config file or JSON file to allow for much greater flexibility in updating maps (I’ve begun this process but have a long way to go). In addition, I hope in the near future to package and submit the map software to the Concrete5 marketplace (under the GPL or MIT license) to allow the general Concrete5 community to take advantage of this code.

If you’re interested in more technical details, or are considering adopting this system in your own website, please don’t hesitate to contact me at david.uspal@villanova.edu and I’d be happy to field your comments or questions.


Expanded ILS Functionality in VuFind

VuFind uses simple PHP classes called ILS drivers to communicate with external integrated library systems in order to obtain information and perform actions that are outside the scope of its own index and database. This includes things like listing a patron’s checked-out items or determining whether books are currently on the shelf. In the past, VuFind’s drivers have been fairly weak with regard to important patron activities like placing holds and renewing books. Several libraries have implemented local customizations to support these features, but the native support involved, at best, linking off to a page in a third-party OPAC.

With the forthcoming VuFind 1.2 release (date not yet determined, but probably late summer or early fall), all that will change. The VuFind driver model has been updated with robust support for expanded patron functionality (thanks largely to the tireless efforts of Luke O’Sullivan, who has been collaborating with me for months on this problem). The ILS Driver Specification has already been updated to reflect the new features, but since this is somewhat complicated, I thought a more narrative explanation of how the new features work might be beneficial.

This article is designed to explain exactly what you need to do to add hold, recall and renewal functionality to your ILS driver. It will also touch on some of the infrastructure changes in VuFind needed to support these new features, and some general best practices for extending drivers. As always, if you want more detail on anything, you are free to contact me through comments on the blog or the VuFind mailing lists.

Basic Principles

One of the complicated things about implementing a generic system for dealing with things like holds and renewals is that different systems have different capabilities and rely on different data in order to achieve these actions. Our design tries to keep as much logic inside the ILS driver as possible. VuFind interacts with the driver in two key ways:

• It queries the driver (by checking for the existence of certain methods and/or using the getConfig method) to determine which features are available. Unsupported capabilities will simply be hidden from the end user.
• It tries to feed the driver with its own data as much as possible. In many cases, the inputs to some methods are outputs from other methods. VuFind makes no assumptions about the contents of the data — it just pushes it to the appropriate places. Associative arrays and delimited strings are the driver author’s friends — these can be used to encapsulate whatever data the driver needs, and VuFind will make sure they end up in the right places. This should all become clearer when you see some examples below!
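To make these two principles concrete, here is a toy model of the pattern — written in JavaScript for brevity, even though VuFind drivers are of course PHP, and with all class, method-internal, and field names invented for illustration:

```javascript
// A minimal "driver" that packs whatever it needs into its own output.
class MinimalDriver {
  getMyHolds(patron) {
    // The driver stuffs its own bookkeeping into a delimited string;
    // VuFind never inspects the contents, it just passes them along.
    return [{ title: "Moby Dick", cancelData: "holdId=123|itemId=456" }];
  }
  getCancelHoldLink(hold) {
    // The driver later unpacks its own string to build the OPAC link.
    return "http://opac.example.edu/cancel?" + hold.cancelData.replace("|", "&");
  }
}

// VuFind-style capability detection: does the method exist at all?
// (In PHP this would be method_exists(); unsupported features are hidden.)
function supportsCancelLinks(driver) {
  return typeof driver.getCancelHoldLink === "function";
}

const driver = new MinimalDriver();
let link = null;
if (supportsCancelLinks(driver)) {
  const holds = driver.getMyHolds({ id: "pat1" });
  // Output of getMyHolds is fed straight back into getCancelHoldLink.
  link = driver.getCancelHoldLink(holds[0]);
}
```

The key point is that only the driver ever needs to understand `cancelData`; VuFind just shuttles it from one method to another.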

The Least Common Denominator

As mentioned earlier, the simplest way to support advanced ILS features is to simply link to the ILS’ native OPAC. This does not generally provide a good user experience, but sometimes it is the only option. There are several methods you can implement if you want (or need) to settle for this minimal level of functionality:

• getHoldLink – given a record’s holdings data, returns a URL in the ILS’ native OPAC where the user can place a hold.
• getCancelHoldLink – given an entry from getMyHolds, returns a URL where the user can cancel that hold.
• getRenewLink – given an entry from getMyTransactions, returns a URL where the user can renew that item.
While getHoldLink has been around for a long time, the other two methods are new… and both of them demonstrate the “driver using its own data” principle discussed above. getCancelHoldLink is fed with an entry from the array returned by getMyHolds, while getRenewLink is similarly fed from getMyTransactions. This is very convenient: when you’re retrieving information from the ILS about current holds or checkouts, it’s easy enough to pull whatever details are needed to link to the system’s OPAC… then you simply assemble it into a URL in the getLink method and you’re done!

Placing Holds Inside VuFind

Obviously, the ideal solution is not linking to a legacy system; it’s filling out a form within VuFind itself. Fortunately, this is now achievable. It requires a few methods to be implemented:

getConfig – Before offering holds functionality, VuFind will call the getConfig method with a parameter of “Holds”. As the driver spec describes in more detail, the method needs to return an associative array containing entries VuFind uses to render the hold form correctly. It is up to you whether to hard-code these values in your ILS driver or pass them along from the driver’s .ini file. The most critical key is the “HMACKeys” value, which tells VuFind which form fields to use in generating an HMAC message authentication code that helps prevent users from placing holds on items that they are not supposed to request. If you omit HMACKeys, VuFind will assume that native holds are disabled and will fail over to the getHoldLink approach.
getHolding – Chances are you already have a getHolding method in your driver, but you may need to augment it with some extra fields in the return array if you need extra data to place holds (for example, a “hold” vs. “recall” status, or an item ID in place of a bib ID). Fortunately, you can include any field of the getHolding return array as part of getConfig’s HMACKeys list in order to ensure that it is passed along to the placeHold method below. This allows you to pass any or all necessary data without VuFind having to know exactly what is needed! If possible, you should also make sure that your getHolding array includes the “addlink” key indicating whether or not the current user is allowed to place a hold on the current item — this key makes it possible to use the “driver” option in config.ini’s Catalog:holds_mode setting, which is usually the smartest way for VuFind to present links.
placeHold – This method receives an associative array containing patron information from patronLogin along with whatever hold form fields were activated through the settings returned by getConfig. It is responsible for actually placing the hold and then returning a success or failure status.

There are quite a few small details to line up here, but the important thing is that the driver specifies what data is needed, provides all of that data, and then uses it to place the hold. All VuFind does is pass the messages from one place to another!

…and the rest

If you understand how holds work, the other new features are very similar, only slightly less complicated. A quick summary:

• To cancel holds, implement getCancelHoldDetails (which generates an identifier string using data passed to it from getMyHolds) and cancelHolds (which actually cancels holds based on patron data and an array of strings generated by getCancelHoldDetails).
• To renew items, implement getRenewDetails (which generates an identifier string using data passed to it from getMyTransactions) and renewMyItems (which actually renews items based on patron data and an array of strings generated by getRenewDetails). Also be sure that getMyTransactions includes an appropriate “renewable” key in its return array.

A Final Word on Object Orientation

That covers how to make holds work… but there’s one more detail that may affect driver authors. It is often the case that an ILS requires a version upgrade or a for-pay API plug-in to support these advanced features. In these situations, some users may want the full functionality, while others may require a more stripped-down version that only supports basic features. This is certainly the case for Voyager, where Voyager 6 users will have to settle for the old getHoldLink functionality while Voyager 7 users may have access to a RESTful API that allows every imaginable bell and whistle. Fortunately, PHP’s object-oriented model offers a simple solution: implement minimal functionality as a base class, then override and add methods in a child class to expand functionality.

The Voyager.php and VoyagerRestful.php drivers are an example of this technique in action. Similar work has been done for Horizon users with and without access to its XML API.

One useful design pattern you may notice if you look at the code for these existing drivers is that large chunks of key methods have been broken out into support methods: one that generates SQL in an abstracted associative array format and one that processes the database response. This makes it relatively easy for a child class to inject a couple of new fields into a query or process data slightly differently without having to copy and paste a large, complex method from the parent class. This design pattern is not only useful for implementing holds functionality; it’s also very handy for making minor local customizations to drivers.
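A minimal sketch of that pattern follows — illustrative names only; the real Voyager drivers are PHP and considerably more involved:

```javascript
// Parent driver breaks query building into an overridable helper, so a
// child can inject fields without copying the whole method.
class BaseDriver {
  buildHoldingQuery(bibId) {
    // Abstracted query pieces (the "associative array format" idea).
    return { fields: ["item_id", "status"], table: "items", where: { bib_id: bibId } };
  }
  getHoldingSql(bibId) {
    // Assembly logic lives in one place, shared by all subclasses.
    // (A real driver would use bound parameters, not string concatenation.)
    const q = this.buildHoldingQuery(bibId);
    return "SELECT " + q.fields.join(", ") + " FROM " + q.table +
      " WHERE bib_id = '" + q.where.bib_id + "'";
  }
}

class RestfulDriver extends BaseDriver {
  buildHoldingQuery(bibId) {
    // Child injects one extra field; everything else is inherited.
    const q = super.buildHoldingQuery(bibId);
    q.fields.push("holdable");
    return q;
  }
}
```

The child class never touches the SQL assembly itself, which is what makes small local customizations so cheap with this design.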


Using Dismax for VuFind’s Advanced Search

The Problem

One of the complexities of dealing with Solr searching is the fact that it has multiple query parsers with different strengths and weaknesses. The “Standard” query parser (sometimes referred to as the “Lucene” parser) offers traditional features like wildcards and boolean operators, but it doesn’t always do a good job when you need to search multiple index fields at the same time. The “Dismax” query parser uses fancy logic to do cross-fielded keyword searching that often seems to work like magic, but it lacks support for all the operators found in the Standard parser. VuFind currently uses a blend of these two mechanisms — most of the time, it relies on the Dismax handler, since that tends to yield the best results… but when a search contains features that Dismax can’t cope with (like a boolean AND or a * wildcard), it fails over to the Standard handler.

One of the big limitations of this situation was that VuFind’s advanced search screen always generated a Standard query, since the advanced search form forces the use of boolean operators, and Dismax doesn’t support booleans. This meant that advanced searches were often slightly inconsistent with basic searches, not to mention being slightly less effective in some cases. Fortunately, due to some little-known and little-documented Solr features, the next VuFind release will address this problem.

The Solution

As it turns out, the Standard query parser supports a pseudo-field called “_query_” which allows you to combine multiple non-Standard queries using Standard operators. You can specify the parser to use in each subquery through the {!parser} syntax. As a result, as long as each individual field of the advanced search form can be handled by Dismax, it is possible to use the Dismax parser for the separate chunks of the advanced search while still combining the chunks together using the Standard parser’s boolean capabilities!

For example, suppose you wanted to combine a Dismax author search with a Dismax title search. You could do it through this Standard search:

_query_:"{!dismax qf=\"author^100 author2^50\"}charles dickens" AND _query_:"{!dismax qf=\"title^100 alt_title^50\"}tale of two cities"

This will perform two Dismax searches (note that you can specify qf boosts inline) and then return only the results that match both of them. It’s not pretty thanks to the need to escape quotes inside the subquery, but it works… and attractiveness doesn’t really matter when it’s all generated automatically by code. Admittedly, VuFind’s search generation logic is fairly convoluted right now, but adding support for this capability only required the addition of a few more lines, as you can see from the patch posted in JIRA, and the benefits are significant.
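For illustration, here is a small sketch of how code might assemble such a query string, handling the quote escaping; VuFind’s actual PHP generation logic covers many more cases:

```javascript
// Build one _query_ subquery using the {!dismax} local-params syntax.
// Inner double quotes must be escaped because the whole subquery is itself
// wrapped in double quotes for the _query_ pseudo-field.
function dismaxSubquery(qf, terms) {
  return '_query_:"{!dismax qf=\\"' + qf + '\\"}' + terms + '"';
}

// Join several Dismax subqueries with a Standard-parser boolean operator.
function advancedSearch(clauses, op) {
  return clauses.map(c => dismaxSubquery(c.qf, c.terms)).join(" " + op + " ");
}

const q = advancedSearch([
  { qf: "author^100 author2^50", terms: "charles dickens" },
  { qf: "title^100 alt_title^50", terms: "tale of two cities" }
], "AND");
```

Running this produces exactly the kind of nested query shown above, with the inner `qf` quotes escaped.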

The Future

Hopefully things can be improved even further in the near future. The latest release of Solr (version 3.1) adds an “extended Dismax” parser which combines many of the best features of the Standard and Dismax parsers. This should greatly reduce the number of situations in which we need to use Standard instead of Dismax, and it may even eliminate the need for the current nest of recursive code that builds cross-field-capable Standard queries. Once I find time to upgrade VuFind’s Solr instance to the new version, I will begin investigating how much of the search logic can be simplified through the use of this new feature.


Java Tuning Made Easier

If you run a constantly-growing Solr index (as many VuFind users do), chances are that sooner or later, you will need to do some Java tuning in order to solve performance problems. There are some good resources already on the web about this topic (for example, Sun’s Java Tuning White Paper), but they tend to be somewhat dense and technical. This article is intended to give a shorter introduction to the problem and the most basic strategies for solving it. If you need more details, by all means refer to more technical sources; I just wanted to offer an easier starting point.

Why Java Needs Tuning

The main reason Java needs tuning has to do with how it handles memory management. I was studying computer science when my university switched its curriculum from C++ to Java, so I’m very familiar with Java’s distinctive approach to this subject. In C++, the programmer is responsible for all the fine details of memory management — you have to request all of the memory that you plan to use, then return it to the operating system when you are done with it. Failing to do this properly leads to the dreaded “memory leak.” Java relieves this burden by taking an entirely different approach: the programmer uses memory without worrying about where it came from, and Java uses something called a “garbage collector” to figure out which pieces of memory are no longer needed and free them up for others to use. The C++ to Java transition caused many lessons to abruptly change from “memory management is of vital importance to all of your work” to “don’t worry about memory management; the magic box will do it for you.”

Usually, sparing the programmer from worrying about memory is a great improvement — it removes a lot of tedium from the work of writing code, and most of the time, the garbage collector just does its job, and nobody has to think about it. The problem is that for complex, memory-hungry applications like Solr, the garbage collector sometimes can’t keep up. The longer the program runs, the more time Java spends on garbage collection and the less time it spends on actually running the program. In extreme situations, a Java program can become completely unresponsive, devoting all of its effort to cleaning up after itself. If you run into problems with VuFind searches becoming extremely slow and find that the problem goes away after you restart VuFind, the cause is almost certainly the garbage collector. A restart frees up all memory and gives Java a clean start, so it’s usually an easy fix to performance problems… but it’s only a matter of time before garbage once again accumulates to a critical level and the problem returns!

Possible Solutions

There are basically three answers to the Java garbage collection problem, and you don’t have to pick just one. Using multiple strategies at the same time often makes sense.

• Regularly restart your Java application — call it cheating or postponing the inevitable if you like, but it’s a very simple approach: if it takes several days for your application to start performing poorly, just schedule it to automatically restart in the middle of the night every night to get a clean slate and consistent stability.

• Give Java more memory — intense garbage collection is triggered by high memory use, so the more memory you have available, the longer it will take for a program to fill it all. Adding memory reduces (or at least postpones) the need for garbage collection, and it’s as simple as changing a couple of parameters (see details in the VuFind wiki). Generally, more memory is always better… but there is one important caveat: don’t give Java all of your system’s available memory, since that can crowd out your operating system and cause other performance problems — always leave a bit of a buffer.

• Change garbage collector behavior — Java has several different garbage collection strategies available, and some of them have additional tuneable parameters. This is where things start to get complicated, but Lucid Imagination’s Java Garbage Collection Boot Camp offers a good run-down of the available choices (not to mention providing some more detailed technical background). Even if you don’t understand all the gory details, knowing the available options means you can do some trial and error.
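For example, the heap-size adjustment mentioned in the second bullet is just a matter of passing JVM options; a typical setting might look like this (values illustrative — see the VuFind wiki for the exact variable your installation uses):

```
# Illustrative values only -- leave headroom for the OS, per the advice above.
JAVA_OPTIONS="-server -Xms1024m -Xmx1024m"
```

Setting `-Xms` and `-Xmx` to the same value avoids heap resizing, at the cost of claiming all the memory up front.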

Testing Your Strategy

Trial and error is an inevitable part of solving Java tuning problems. The biggest shortcoming I found in other articles on the subject is that they don’t offer a simple strategy for doing this. Fortunately, it’s not too hard to test your progress using some simple tools.

Java is capable of recording a log of all of its garbage collection behavior, telling you how often it performs garbage collection and how long each collection takes to complete. While the exact parameter for generating a log may vary depending on the Java Virtual Machine that you are using, for VuFind’s preferred OpenJDK version, you can add something like this to your Java options:

-Xloggc:/tmp/garbage.log
As you can probably guess, this outputs the garbage collection data to a log file called /tmp/garbage.log. If you want something fancier, you could do this instead:

-Xloggc:$VUFIND_HOME/solr/jetty/logs/gc-`/bin/date +%F-%H-%M`.log

Through the magic of the Unix shell, this version stores logs inside VuFind’s solr/jetty/logs folder, naming each log file with the date and time that VuFind started up so that you can track behavior across multiple restarts.

So far, so good… except that these log files are really hard to read. Fortunately, an excellent tool exists to help visualize the data: gcviewer. With gcviewer, you can see a graph of your memory usage and the time spent on garbage collection, plus there are a number of handy statistics available (average collection time, total collection time, longest collection time, etc.). If gcviewer doesn’t meet your needs, there is also an IBM tool called PMAT which is slightly less convenient to download but which supports a broader range of log formats.

By logging data for several days between each tweak to your Java settings and using gcviewer or PMAT to analyze your logs, you can usually get a pretty good sense of whether you’ve made things better or worse… and how long it takes for your application to fall into the pit of inefficient garbage collection.


Java tuning is never going to be an easy subject to understand deeply, but that doesn’t mean you need to be afraid of it. There are several simple strategies available to help solve your problems even if you don’t know all the details of what is going on under the hood, and there are readily available tools to help you support your inevitable trial and error with empirical data. In fact, even if you are experiencing perfect performance today, it might not be a bad idea to examine garbage collection logs occasionally to see if you can prevent future problems before they become noticeable! Magic problem-solving boxes are great most of the time, but a bit of knowledge is always helpful for those times when they let you down.


Highlighting and Snippets in VuFind 1.1

One of the perils of keyword-based searching is that sometimes it is not totally clear why certain results show up after performing a search. Fortunately, two common conventions help ease this problem: highlighting matching keywords and displaying snippets of text to show matches in context. The Solr index engine has supported both of these features for a long time, but VuFind has only provided robust support for them starting in version 1.1.

Activating Highlighting and Snippets in VuFind

As a VuFind administrator, if you want to take advantage of these new features, all you have to do is upgrade to VuFind 1.1 and they will be turned on by default. If you want to turn them off or adjust some of the behavior, you can make a few adjustments to your searches.ini file as described in the VuFind wiki. Unless you are interested in the technical workings behind the scenes, that is all you need to know. Have fun! Solr power users, VuFind developers and other interested techies, please read on….

Highlighting and Snippets at the Solr Level

Solr’s support for highlighting and snippets is straightforward. By means of some search parameters (set in the solrconfig.xml configuration file and/or as part of the search request), you tell Solr whether or not to apply highlighting, which fields to highlight, how to mark highlighted words, and so forth. When highlighting is requested, Solr adds a new section to its search response listing all of the highlighted phrases found in all of the documents in the search response. The highlighting information is completely separate from the main list of search results, so highlighting does not actually alter the main part of the Solr response — the details need to be merged in by the calling code.
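For example, highlighting defaults might be set in a request handler’s defaults section of solrconfig.xml along these lines (the field list here is illustrative, not VuFind’s actual configuration):

```xml
<!-- Hypothetical excerpt from a search handler's <lst name="defaults"> -->
<str name="hl">true</str>
<str name="hl.fl">title,author</str>
<str name="hl.simple.pre">{{{{START_HILITE}}}}</str>
<str name="hl.simple.post">{{{{END_HILITE}}}}</str>
<str name="hl.fragsize">100</str>
```

The same parameters can also be sent per-request; either way, the highlighted phrases come back in a separate section of the response.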

Problem #1: Marking Highlighted Text

One of the first problems that needs to be addressed is how to mark highlighted words in the Solr response. Solr provides hl.simple.pre and hl.simple.post parameters which can be used to specify text to mark the beginning and ending of highlighted words. The obvious first temptation is to simply stick some HTML in here — "<em>" and "</em>", for example. This can lead to pitfalls, however — if you are escaping your output, the HTML won’t make it through, and the end user will actually see the HTML code. If you are not escaping your output, then text between or around the emphasis tags may get misinterpreted as HTML, leading to garbled displays (never assume you won’t have angle brackets somewhere in your records!).

VuFind’s solution to this problem is fairly obvious — it uses markers that are extremely unlikely to show up in record text (“{{{{START_HILITE}}}}” and “{{{{END_HILITE}}}}”) and defines a special escaping routine used only for highlighted text. When displaying something that it knows has been highlighted, it first escapes any possible HTML entities, and THEN it replaces the highlighting markers with HTML code that achieves the actual highlighting logic. You can see the Smarty modifier that achieves this work here. Note that the Smarty code contains some extra logic for finding and highlighting words, since it is also designed for use by other modules of VuFind that are unable to rely on Solr’s highlighting capabilities — this logic is ignored when Solr results are being displayed.
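Here is a minimal JavaScript sketch of the escape-then-replace approach (VuFind’s real implementation is a Smarty modifier in PHP, with extra logic as noted above):

```javascript
// Escape the three characters that matter for safe HTML output.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function renderHighlighted(text) {
  // 1. Escape everything, including any stray angle brackets in the record...
  const escaped = escapeHtml(text);
  // 2. ...THEN turn the safe markers into real HTML.
  return escaped
    .replace(/\{\{\{\{START_HILITE\}\}\}\}/g, '<span class="highlight">')
    .replace(/\{\{\{\{END_HILITE\}\}\}\}/g, "</span>");
}
```

Because the markers contain no HTML-significant characters, they survive escaping untouched and can be safely swapped for real tags afterward.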

Problem #2: Merging Highlighting Data with Records

As mentioned above, Solr provides highlighting information completely separately from its search result list. This can be rather inconvenient since it requires code to look in two different places during record processing. The first temptation when encountering this problem is to write code that merges everything together, overwriting fields in the main response with highlighted versions found elsewhere in the response. However, as with many first temptations, that’s a bad idea. First of all, you will very likely lose data if you do this. In a multi-valued field, it is possible that only certain values will be highlighted and others omitted entirely. Also, unless the hl.fragsize parameter is set to 0, snippets will be truncated to only show a few words around the highlighted term. Additionally, data loss aside, it is often convenient to have both highlighted and non-highlighted versions of fields available; for example, if you want to create a link to a page about an author, you want to use the non-highlighted text for inclusion in the target URL, but you want to use the highlighted version to display the link text.

Again, VuFind works through these issues in a fairly straightforward way. For convenience, it does merge the highlighting data with the search results so that code doesn’t need to look in two completely separate arrays for information about each record. However, it doesn’t overwrite any fields; instead, it creates a fake “_highlighting” field within the body of the record and stores all of the highlighting details in there. Whenever VuFind displays a field that might be subject to highlighting, it looks in two places — first it checks the _highlighting array and displays properly processed, highlighted text if it finds any. If no highlighted version exists, it fails over to the standard, non-highlighted text. Admittedly, this adds a bit more complexity to the display templates, but it seems a reasonable price to pay to ensure data integrity. It also helps to remind template designers where they need to use the Smarty highlight modifier described above, greatly reducing the risk of any “{{{{START_HILITE}}}}” tags accidentally slipping through to the end user’s display.
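A rough model of this merge-and-fallback strategy, with illustrative data shapes (VuFind’s real response handling is PHP):

```javascript
// Attach each record's highlighting data under a fake "_highlighting" key
// instead of overwriting the original fields.
function mergeHighlighting(docs, highlighting) {
  return docs.map(doc => Object.assign({}, doc, {
    _highlighting: highlighting[doc.id] || {}
  }));
}

// Display helper: prefer the highlighted version, fall back to the plain one.
function displayField(doc, field) {
  const hl = doc._highlighting[field];
  return (hl && hl.length) ? hl[0] : doc[field];
}

const merged = mergeHighlighting(
  [{ id: "1", title: "A Tale of Two Cities", author: "Dickens, Charles" }],
  { "1": { title: ["A {{{{START_HILITE}}}}Tale{{{{END_HILITE}}}} of Two Cities"] } }
);
```

Note that both versions of the title remain available: the plain field for building URLs, and the marked-up version for display.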

Problem #3: Highlighted Text May Be Truncated

As discussed above, highlighted text may be truncated in some circumstances (by default, snippets are limited to about 100 characters). This is reasonable, since search results should be brief and easy to read. Indeed, even before it supported highlighting, VuFind already had code to trim down super-long titles in search results. The critical difference between the old title-trimming code and the new reliance on Solr snippets is that the old code always showed the beginning of a title, while Solr snippets occasionally come from the middle of a title, yielding strange-looking results. Setting the hl.fragsize parameter to 0 is an option, though that will lead to very long titles in search results. VuFind’s solution relies on another new Smarty modifier (modifier.addEllipsis.php) which compares highlighted text against non-highlighted text and adds periods of ellipsis on each end if truncation is detected. This may not be a perfect solution, but at least it adds a little more visual context to the truncated text.
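The addEllipsis idea can be sketched like this — a simplification of the actual Smarty modifier, comparing the snippet (markers stripped) against the full field value:

```javascript
// Add "..." to whichever ends of the snippet were truncated, judged by
// comparing against the full non-highlighted text.
function addEllipsis(snippet, fullText) {
  const plain = snippet.replace(/\{\{\{\{(START|END)_HILITE\}\}\}\}/g, "");
  let result = snippet;
  if (!fullText.startsWith(plain)) result = "..." + result;
  if (!fullText.endsWith(plain)) result = result + "...";
  return result;
}
```

A snippet pulled from the middle of a long title gets ellipses on both ends, while an untruncated field passes through unchanged.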

There is one additional caveat that should be noted: multi-valued fields are still a problem. If a field contains five values and only two of them match search terms, then the highlighting data will only contain (at most) two values. VuFind does not currently contain any mechanisms for matching up partial highlighted results with longer lists of non-highlighted results. The problem is avoided in the simplest way possible: the highlighted fields currently used in VuFind’s search result templates (title and primary author) are single-valued. Multi-valued fields are only displayed as snippets (see below).

Problem #4: Displaying Snippets

As discussed above, there are certain Solr fields which VuFind will always display in search results: most importantly, title and author. However, keyword matches may fall outside of these displayed fields. For that reason, it is helpful to display snippets showing matches in other fields. Since there may be many snippets, and the search result listing should be kept reasonably brief, it makes sense to try to display just one snippet, preferably the most relevant one.

Snippet selection is handled by the IndexRecord record driver, the base class that handles display of all records retrieved from the Solr index. This class contains two arrays: $preferredSnippetFields, an array of fields that are very likely to have good snippet data and should be checked first, and $forbiddenSnippetFields, an array of fields with bad or redundant data that should never be considered for use as a snippet. By default, $preferredSnippetFields contains subject headings and table of contents entries, since these tend to offer valuable information, while $forbiddenSnippetFields contains author and title fields (unnecessary for snippets since they are always displayed elsewhere in the template), ID values (obviously uninformative) and the spelling field (a jumble of data duplicated from other fields, necessary for spell checking but misleading as a snippet). The getHighlightedSnippet method uses these arrays to pick a single best snippet, first checking the preferred fields and then taking the first available non-forbidden field if necessary. Since the method and its related arrays are all protected, it is possible to extend the IndexRecord class and create custom behavior as needed on a driver-by-driver basis.
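The two-pass selection described above can be summarized in Python as follows (the field names are examples only, and the real implementation lives in VuFind's PHP IndexRecord driver):

```python
# Hypothetical stand-ins for the driver's $preferredSnippetFields and
# $forbiddenSnippetFields arrays; actual field names will differ.
PREFERRED_SNIPPET_FIELDS = ["topic", "contents"]
FORBIDDEN_SNIPPET_FIELDS = {"author", "title", "id", "spelling"}

def pick_snippet(highlights):
    """Pick a single best snippet from a dict of highlighted fields.

    `highlights` maps Solr field names to lists of highlighted snippets.
    Returns a (field, snippet) tuple, or None if nothing suitable exists.
    """
    # First pass: preferred fields, checked in priority order.
    for field in PREFERRED_SNIPPET_FIELDS:
        if highlights.get(field):
            return field, highlights[field][0]
    # Second pass: take the first available field that is not forbidden.
    for field, values in highlights.items():
        if field not in FORBIDDEN_SNIPPET_FIELDS and values:
            return field, values[0]
    return None
```

So a record whose only highlighting matches fall in the author and topic fields would yield the topic snippet, while a record matching only on forbidden fields would yield no snippet at all.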

One further detail helps make things clearer: some snippets make little sense out of context, so searches.ini contains a [Snippet_Captions] section where Solr fields can be assigned labels that will be used as captions in front of snippets. Snippets for fields not listed in this section will display as stand-alone, uncaptioned lines in the search results.
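A [Snippet_Captions] section might look something like this (the exact field names and labels shown here are examples, not a definitive excerpt from VuFind's shipped searches.ini):

```ini
; searches.ini (excerpt)
[Snippet_Captions]
contents = "Table of Contents"
topic = "Subjects"
```

With this configuration, a snippet drawn from the contents field would be prefixed with "Table of Contents:", while a snippet from an unlisted field would appear on its own line without a caption.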


Highlighting and snippets really aren’t too difficult to work with, but as with almost anything, they turn out to be a little more complicated than expected once you look at all of the details. I hope this post has helped point out the most obvious pitfalls and explain the reasoning behind VuFind’s implementation. There is still plenty more that could be done — some of the behavior could be made even smarter, and more of Solr’s power could be exposed through VuFind configuration settings. If you have ideas or questions, please feel free to share them as comments on this post or via the vufind-tech mailing list.


Welcome to the Villanova Library Technology Blog

Since I often read and enjoy Jonathan Rochkind’s blog, where he goes into great detail about the complexities of life as a library programmer, I was pleased when he asked me to write a bit about some of the new features in VuFind 1.1.  That post will be coming up shortly.  In the meantime, thank you, Jonathan, for prompting the creation of this blog.  I hope this will become a useful resource for keeping up with the latest developments from Villanova’s library technology team and that the information here will be interesting and informative whether or not you use our software.  Stay tuned for periodic posts about how we have approached various problems during the course of our work on VuFind, the forthcoming VuDL digital library package, and other library-related technologies.




Last Modified: March 22, 2011