Replicating Data
Like all web applications, the Jobs module’s API can be used programmatically as well as interactively. We can imagine a web-client script that would replicate data across multiple dhttp nodes. It would perform the following steps, sketched in Perl just after the list:
1. Invoke each instance’s viewer.
2. Parse the resulting HTML table.
3. Build a master data set.
4. Create and invoke the update URLs needed to transmit the newest version of each record to each node.
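Here is a minimal sketch of such a client. The node addresses, the /do_jobs_viewer and /do_jobs_update paths, and the id/title/modified record fields are all assumptions for illustration; each real plug-in defines its own. The parse_table() routine is filled in after the next paragraph.

#!/usr/bin/perl -w
use strict;
use LWP::Simple;
use URI::Escape;

# Hypothetical node addresses and plug-in paths; real ones will vary.
my @nodes = ('http://node1:1234', 'http://node2:1234');
my %master;    # master data set, keyed by record ID

# Steps 1-3: invoke each viewer, parse its table, keep the newest records.
foreach my $node (@nodes)
  {
  my $html = get("$node/do_jobs_viewer") or next;
  foreach my $rec ( parse_table($html) )
    {
    my $key = $rec->{id};
    $master{$key} = $rec                     # assumes sortable timestamps
      if ( ! exists $master{$key} or
           $rec->{modified} gt $master{$key}->{modified} );
    }
  }

# Step 4: build and invoke an update URL for every record on every node.
foreach my $node (@nodes)
  {
  foreach my $rec (values %master)
    {
    my $qs = join '&', map { "$_=" . uri_escape($rec->{$_}) } keys %$rec;
    get("$node/do_jobs_update?$qs");
    }
  }

The replication logic itself is a dozen lines; the rest is URL plumbing, which is exactly the tedium the next paragraph complains about.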
That’s doable but tedious, though not because it’s hard to fetch and parse the data; converting an HTML table into lists and hashtables is trivial, as the sketch below shows. What’s tedious is building the fetch and update URLs. Each plug-in exports its own unique web API, so scripts that use those APIs are hardwired to each plug-in.
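To make the “trivial” claim concrete, here is one way to write the parse_table() routine used above, based on the CPAN module HTML::TableExtract. The column headers are placeholders standing in for whatever the Jobs viewer actually emits.

use HTML::TableExtract;

# Convert the viewer's HTML table into a list of hashtables,
# one per record. Header names here are assumptions.
sub parse_table
  {
  my ($html) = @_;
  my $te = HTML::TableExtract->new( headers => ['id', 'title', 'modified'] );
  $te->parse($html);
  my @records;
  foreach my $ts ( $te->tables )
    {
    foreach my $row ( $ts->rows )
      {
      my ($id, $title, $modified) = @$row;
      push @records, { id => $id, title => $title, modified => $modified };
      }
    }
  return @records;
  }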
What if we were to turn dhttp into an SQL server?
The public function do_engine_sql(), shown in Example 15-7, does just that. It accepts a URL-encoded SQL query and returns a result set formatted as a list-of-lists. Amazingly, in just 25 lines of Perl, this function transforms a Windows PC into a low-intensity database server. Suddenly the ODBC interface and the Jet engine, components that exist on a vast number of desktop machines, can export SQL capability to local or remote web clients.
Example 15-7. Turning dhttp into a Lightweight SQL Server
sub do_engine_sql
  {
  my ($args) = @_;
  # parse the URL-encoded argument string into a hashtable reference
  my ($argref) = Engine::PrivUtils::getArgs($args);
  # URL-decode the SQL statement, connection name, and credentials
  my ($st)     = Engine::PrivUtils::unescape($$argref{st});
  my ($conn)   = Engine::PrivUtils::unescape($$argref{conn});
  my ($dbuser) = Engine::PrivUtils::unescape($$argref{dbuser});
  ...
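The listing breaks off after unpacking the arguments. Purely as a sketch of where it is headed, and not the book’s actual code, the body might continue along these lines using the CPAN module Win32::ODBC; the Data::Dumper wire format is likewise an assumption.

use Win32::ODBC;
use Data::Dumper;

  # ...unpack the remaining credential, then connect to the data source
  my ($dbpass) = Engine::PrivUtils::unescape($$argref{dbpass});
  my ($db) = new Win32::ODBC("DSN=$conn;UID=$dbuser;PWD=$dbpass");
  return 'cannot connect: ' . Win32::ODBC::Error() unless $db;

  # Sql() returns an error code on failure, undef on success
  if ( $db->Sql($st) )
    {
    my ($err) = $db->Error();
    $db->Close();
    return "sql error: $err";
    }

  # collect the result set as a list-of-lists, one array ref per row
  my (@results);
  while ( $db->FetchRow() )
    { push @results, [ $db->Data() ]; }
  $db->Close();

  # serialize the list-of-lists for the HTTP response
  return Dumper(\@results);

A client could then put its URL-encoded query in the st argument of a single generic URL and eval the returned structure, instead of learning a different web API for every plug-in.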