This chapter gives recipes for making use of MongoDB in your Lift application. Many of the code examples in this chapter can be found at https://github.com/LiftCookbook/cookbook_mongo.
Add the Lift MongoDB dependencies to your build and configure a connection using net.liftweb.mongodb and com.mongodb.
In build.sbt, add the following to libraryDependencies:

"net.liftweb" %% "lift-mongodb-record" % liftVersion
In Boot.scala, add:
import com.mongodb.{ServerAddress, Mongo}
import net.liftweb.mongodb.{MongoDB, DefaultMongoIdentifier}

val server = new ServerAddress("127.0.0.1", 27017)
MongoDB.defineDb(DefaultMongoIdentifier, new Mongo(server), "mydb")
This will give you a connection to a local MongoDB database called mydb.
If your database needs authentication, use MongoDB.defineDbAuth:
MongoDB.defineDbAuth(DefaultMongoIdentifier, new Mongo(server),
  "mydb", "username", "password")
Some cloud services will give you a URL to connect to, such as mongodb://alex.mongohq.com:10050/fglvBskrsdsdsDaGNs1. In this case, the host and the port make up the first part, and the database name is the part after the /.
If you need to turn a URL like this into a connection, you can do so by using java.net.URI to parse the URL and make a connection:
import java.net.URI

object MongoUrl {

  def defineDb(id: MongoIdentifier, url: String) {

    val uri = new URI(url)

    val db = uri.getPath drop 1
    val server = new Mongo(new ServerAddress(uri.getHost, uri.getPort))

    Option(uri.getUserInfo).map(_.split(":")) match {
      case Some(Array(user, pass)) => MongoDB.defineDbAuth(id, server, db, user, pass)
      case _ => MongoDB.defineDb(id, server, db)
    }
  }
}

MongoUrl.defineDb(DefaultMongoIdentifier, "mongodb://user:pass@127.0.0.1:27017/myDb")
The full URL scheme for MongoDB is more complicated, allowing for multiple hosts and connection parameters, but the previous code handles optional username and password fields and may be enough to get you up and running with your MongoDB configuration.
The DefaultMongoIdentifier is a value used to identify a particular connection. Lift keeps a map of identifiers to connections, meaning you can connect to more than one database. The common case is a single database, and that is usually assigned to DefaultMongoIdentifier.
However, if you do need to access two MongoDB databases, you can create a new identifier and assign it as part of your record. For example:
object OtherMongoIdentifier extends MongoIdentifier {
  def jndiName: String = "other"
}

MongoUrl.defineDb(OtherMongoIdentifier, "mongodb://127.0.0.1:27017/other")

object Country extends Country with MongoMetaRecord[Country] {
  override def collectionName = "example.earth"
  override def mongoIdentifier = OtherMongoIdentifier
}
The lift-mongodb-record dependency itself depends on another Lift module, lift-mongodb, which provides connectivity and other lower-level access to MongoDB. Both bottom out with the MongoDB Java driver.
Connection configuration that includes replica sets and MongoDB options, such as timeout settings, is described on the Lift wiki.
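To give a flavour of what that configuration can look like, here is a minimal sketch of a replica-set connection with a couple of driver options set. The hostnames and timeout values are invented for illustration, and the MongoOptions fields shown are from the 2.x Java driver used in this chapter, so check the wiki page and your driver version before relying on them:

import com.mongodb.{Mongo, MongoOptions, ServerAddress}
import net.liftweb.mongodb.{MongoDB, DefaultMongoIdentifier}
import scala.collection.JavaConverters._

// Illustrative driver options; pick values to suit your deployment
val options = new MongoOptions
options.connectTimeout = 1000   // milliseconds to wait when establishing a connection
options.socketTimeout  = 10000  // milliseconds to wait on socket reads/writes

// Seed addresses for the replica set (hypothetical hosts)
val seeds = List(
  new ServerAddress("db1.example.com", 27017),
  new ServerAddress("db2.example.com", 27017))

MongoDB.defineDb(DefaultMongoIdentifier, new Mongo(seeds.asJava, options), "mydb")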
The full MongoDB connection format is described in Connection String URI Format.
Create a MongoDB record that contains a MongoMapField:
import net.liftweb.mongodb.record._
import net.liftweb.mongodb.record.field._

class Country private () extends MongoRecord[Country] with StringPk[Country] {
  override def meta = Country
  object population extends MongoMapField[Country, Int](this)
}

object Country extends Country with MongoMetaRecord[Country] {
  override def collectionName = "example.earth"
}
In this example, we are creating a record for information about a country, and the population is a map from a String key, representing a city in that country, to an Integer value, representing the population of that city.
We can use it in a snippet like this:
class Places {

  val uk = Country.find("uk") openOr {
    val info = Map(
      "Brighton" -> 134293,
      "Birmingham" -> 970892,
      "Liverpool" -> 469017)

    Country.createRecord.id("uk").population(info).save
  }

  def facts = "#facts" #> (
    for {
      (name, pop) <- uk.population.is
    } yield ".name *" #> name & ".pop *" #> pop
  )
}
When this snippet is called, it looks up a record by _id of uk or creates it using some canned information. The template to go with the snippet could include:
<div data-lift="Places.facts">
  <table>
    <thead>
      <tr><th>City</th><th>Population</th></tr>
    </thead>
    <tbody>
      <tr id="facts">
        <td class="name">Name here</td><td class="pop">Population</td>
      </tr>
    </tbody>
  </table>
</div>
In MongoDB, the resulting data structure would be:
$ mongo cookbook
MongoDB shell version: 2.0.6
connecting to: cookbook
> show collections
example.earth
system.indexes
> db.example.earth.find().pretty()
{
  "_id" : "uk",
  "population" : {
    "Brighton" : 134293,
    "Birmingham" : 970892,
    "Liverpool" : 469017
  }
}
If you do not set a value for the map, the default will be an empty map, represented in MongoDB as the following:
{ "_id" : "uk", "population" : { } }
An alternative is to mark the field as optional:
object population extends MongoMapField[Country, Int](this) {
  override def optional_? = true
}
If you now write the document without a population set, the field will be omitted in MongoDB:
> db.example.earth.find();
{ "_id" : "uk" }
To append data to the map from your snippet, you can modify the record to supply a new Map:
uk.population(uk.population.is + ("Westminster" -> 81766)).update
Note that we are using update here, rather than save. The save method is pretty smart and will either insert a new document into a MongoDB collection or replace an existing document based on the _id. Update is different: it detects just the changed fields of the document and updates them. It will send this command to MongoDB for the document:
{
  "$set" : {
    "population" : {
      "Brighton" : 134293,
      "Liverpool" : 469017,
      "Birmingham" : 970892,
      "Westminster" : 81766
    }
  }
}
You'll probably want to use update over save for changes to existing records.
To access an individual element of the map, you can use get (or value):
uk.population.get("San Francisco")
// will throw java.util.NoSuchElementException
or you can access via the standard Scala map interface:
val sf: Option[Int] = uk.population.is.get("San Francisco")
You should be aware that MongoMapField supports only primitive types. The mapped field used in this recipe is typed String => Int, but of course MongoDB will let you mix types, such as putting a String or a Boolean as a population value. If you do modify the MongoDB record in the database outside of Lift and mix types, you'll get a java.lang.ClassCastException at runtime.

There's a discussion on the mailing list regarding the limited type support in MongoMapField and a possible way around it by overriding asDBObject.
Use EnumNameField to store the string value of the enumeration. Here's an example using days of the week:
object DayOfWeek extends Enumeration {
  type DayOfWeek = Value
  val Mon, Tue, Wed, Thu, Fri, Sat, Sun = Value
}
We can use this to model someone’s birth day-of-week:
package code.model

import net.liftweb.mongodb.record._
import net.liftweb.mongodb.record.field._
import net.liftweb.record.field.EnumNameField

class Birthday private () extends MongoRecord[Birthday] with StringPk[Birthday] {
  override def meta = Birthday
  object dow extends EnumNameField(this, DayOfWeek)
}

object Birthday extends Birthday with MongoMetaRecord[Birthday]
When creating records, the dow field will expect a DayOfWeek value:
import DayOfWeek._

Birthday.createRecord.id("Albert Einstein").dow(Fri).save
Birthday.createRecord.id("Richard Feynman").dow(Sat).save
Birthday.createRecord.id("Isaac Newton").dow(Sun).save
Take a look at what’s stored in MongoDB:
> db.birthdays.find()
{ "_id" : "Albert Einstein", "dow" : "Fri" }
{ "_id" : "Richard Feynman", "dow" : "Sat" }
{ "_id" : "Isaac Newton", "dow" : "Sun" }
The dow value is the toString of the enumeration, not the id value:
Fri.toString
// java.lang.String = Fri

Fri.id
// Int = 4
If you want to store the ID, use EnumField instead.
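For comparison, here is a hedged sketch of a variant record that stores the numeric ID; the record name is hypothetical, and the DayOfWeek enumeration is the one defined above. With this field, Albert Einstein's document would contain "dow" : 4 rather than "dow" : "Fri".

import net.liftweb.mongodb.record._
import net.liftweb.mongodb.record.field._
import net.liftweb.record.field.EnumField

// Hypothetical variant of Birthday storing the enumeration id rather than its name
class BirthdayById private () extends MongoRecord[BirthdayById]
  with StringPk[BirthdayById] {
  override def meta = BirthdayById
  object dow extends EnumField(this, DayOfWeek)
}

object BirthdayById extends BirthdayById with MongoMetaRecord[BirthdayById]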
Be aware that other tools, notably Rogue, expect the string value, not the integer ID, of an enumeration, so you may prefer to use EnumNameField for that reason.
Using Rogue introduces Rogue.
You have a MongoDB record, and you want to embed another set of values inside it as a single entity.
Use BsonRecord to define the document to embed, and embed it using BsonRecordField. Here's an example of storing information about an image within a record:
import net.liftweb.record.field.{IntField, StringField}

class Image private () extends BsonRecord[Image] {
  def meta = Image

  object url extends StringField(this, 1024)
  object width extends IntField(this)
  object height extends IntField(this)
}

object Image extends Image with BsonMetaRecord[Image]
We can reference instances of the Image class via BsonRecordField:
class Country private () extends MongoRecord[Country] with StringPk[Country] {
  override def meta = Country
  object flag extends BsonRecordField(this, Image)
}

object Country extends Country with MongoMetaRecord[Country] {
  override def collectionName = "example.earth"
}
To associate a value:
val unionJack =
  Image.createRecord.url("http://bit.ly/unionflag200").width(200).height(100)

Country.createRecord.id("uk").flag(unionJack).save(true)
In MongoDB, the resulting data structure would be:
> db.example.earth.findOne()
{
  "_id" : "uk",
  "flag" : {
    "url" : "http://bit.ly/unionflag200",
    "width" : 200,
    "height" : 100
  }
}
If you don't set a value on the embedded document, the default will be saved as:

"flag" : { "width" : 0, "height" : 0, "url" : "" }
You can prevent this by making the image optional:
object image extends BsonRecordField(this, Image) {
  override def optional_? = true
}
With optional_? set in this way, the image part of the MongoDB document won't be saved if the value is not set. Within Scala you will then want to access the value with a valueBox call:
val img: Box[Image] = uk.flag.valueBox
In fact, regardless of the setting of optional_?, you can access the value using valueBox.
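For example, a minimal sketch of reacting to the presence or absence of the embedded value, assuming the uk record from this recipe:

import net.liftweb.common.Full

uk.flag.valueBox match {
  case Full(image) => println("Flag image at " + image.url.is)
  case _           => println("No flag recorded")
}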
An alternative to optional values is to always provide a default value for the embedded document:
object image extends BsonRecordField(this, Image) {
  override def defaultValue =
    Image.createRecord.url("http://bit.ly/unionflag200").width(200).height(100)
}
The Lift wiki describes BsonRecord in more detail.
Create a reference using a MongoRefField such as ObjectIdRefField or StringRefField, and dereference the record using the obj call.
As an example, we can create records representing countries, where a country references the planet where you can find it:
class Planet private () extends MongoRecord[Planet] with StringPk[Planet] {
  override def meta = Planet
  object review extends StringField(this, 1024)
}

object Planet extends Planet with MongoMetaRecord[Planet] {
  override def collectionName = "example.planet"
}

class Country private () extends MongoRecord[Country] with StringPk[Country] {
  override def meta = Country
  object planet extends StringRefField(this, Planet, 128)
}

object Country extends Country with MongoMetaRecord[Country] {
  override def collectionName = "example.country"
}
To make this example easier to follow, our model mixes in StringPk[Planet] to use strings as the primary key on our documents, rather than the more usual MongoDB object IDs. Consequently, the link is established with a StringRefField.
In a snippet we can make use of the planet reference by resolving it with .obj:
class HelloWorld {

  val uk = Country.find("uk") openOr {
    val earth = Planet.createRecord.id("earth").review("Harmless").save
    Country.createRecord.id("uk").planet(earth.id.is).save
  }

  def facts =
    ".country *" #> uk.id &
    ".planet" #> uk.planet.obj.map { p =>
      ".name *" #> p.id &
      ".review *" #> p.review
    }

}
For the value uk, we look up an existing record, or create one if none is found. We create earth as a separate MongoDB record, and then reference it in the planet field with the ID of the planet. Retrieving the reference is via the obj method, which returns a Box[Planet] in this example.
Referenced records are fetched from MongoDB when you call the obj method on a MongoRefField. You can see this by turning on logging in the MongoDB driver. Do this by adding the following to the start of your Boot.scala:
System.setProperty("DEBUG.MONGO", "true")
System.setProperty("DB.TRACE", "true")
Having done this, the first time you run the previous snippet, your console will include:
INFO: find: cookbook.example.country { "_id" : "uk"}
INFO: update: cookbook.example.planet { "_id" : "earth"}
  { "_id" : "earth" , "review" : "Harmless"}
INFO: update: cookbook.example.country { "_id" : "uk"}
  { "_id" : "uk" , "planet" : "earth"}
INFO: find: cookbook.example.planet { "_id" : "earth"}
What you're seeing here is the initial lookup for uk, followed by the creation of the earth record and an update that is saving the uk record. Finally, there is a lookup of earth when uk.obj is called in the facts method.
The obj call will cache the planet reference. That means you could say:

".country *" #> uk.id &
".planet *" #> uk.planet.obj.map(_.id) &
".review *" #> uk.planet.obj.map(_.review)
and you'd still only see one query for the earth record despite calling obj multiple times. The flip side of that is if the earth record was updated elsewhere in MongoDB after you called obj, you would not see the change from a call to uk.obj unless you reloaded the uk record first.
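If you do need to pick up such a change, a minimal sketch is to re-fetch the country and follow the reference again (using the records from this recipe):

// Re-fetching gives a fresh Country, so its planet reference is resolved anew
for {
  freshUk <- Country.find("uk")
  planet  <- freshUk.planet.obj
} println(planet.review.is)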
Searching for records by a reference is straightforward:
val earth: Planet = ...

val onEarth: List[Country] = Country.findAll(Country.planet.name, earth.id.is)
Or in this case, because we have String references, we could just say:
val onEarth: List[Country] = Country.findAll(Country.planet.name, "earth")
Updating a reference is as you’d expect:
uk.planet.obj.foreach(_.review("Mostly harmless.").update)
This would result in the changed field being set:
INFO: update: cookbook.example.planet { "_id" : "earth"}
  { "$set" : { "review" : "Mostly harmless."}}
A uk.planet.obj call will now return a planet with the new review.
Or you could replace the reference with another:
uk.planet(Planet.createRecord.id("mars").save.id.is).save
Again, note that the reference is via the ID of the record (id.is), not the record itself.
To remove the reference:
uk.planet(Empty).save
This removes the link, but the MongoDB record pointed to by the link will remain in the database. If you remove the object being referenced, a later call to obj will return an Empty box.
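A minimal sketch of coding defensively against that situation, using the Box returned by obj:

// Falls back to a default when the referenced planet no longer exists
val reviewText: String =
  uk.planet.obj.map(_.review.is) openOr "No planet on record"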
The example uses a StringRefField, as the MongoDB records themselves use String as the _id. Other reference types are:
- ObjectIdRefField: This is possibly the most frequently used kind of reference, when you want to reference via the usual default ObjectId in MongoDB (a sketch using it follows this list).
- UUIDRefField: This is used for records with an ID based on java.util.UUID.
- StringRefField: This is used in this example, where you control the ID as a String.
- IntRefField and LongRefField: These are used when you have a numeric value as an ID.
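As an illustration of the first of those, here is a hedged sketch of the planet link re-expressed with object IDs rather than string keys. The class names are invented to avoid clashing with the example above, and the ObjectIdRefField constructor shown mirrors the StringRefField usage, so check the API for your Lift version:

import net.liftweb.mongodb.record._
import net.liftweb.mongodb.record.field.{ObjectIdPk, ObjectIdRefField}
import net.liftweb.record.field.StringField

class Planet2 private () extends MongoRecord[Planet2] with ObjectIdPk[Planet2] {
  override def meta = Planet2
  object review extends StringField(this, 1024)
}
object Planet2 extends Planet2 with MongoMetaRecord[Planet2]

class Country2 private () extends MongoRecord[Country2] with ObjectIdPk[Country2] {
  override def meta = Country2
  // References a Planet2 by its ObjectId; resolve with country.planet.obj as before
  object planet extends ObjectIdRefField(this, Planet2)
}
object Country2 extends Country2 with MongoMetaRecord[Country2]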
10Gen, Inc.’s Data Modeling Decisions describes embedding of documents compared to referencing objects.
You want to use Foursquare’s type-safe domain specific language (DSL), Rogue, for querying and updating MongoDB records.
You need to include the Rogue dependency in your build and import Rogue into your code.
For the first step, edit build.sbt and add:

"com.foursquare" %% "rogue" % "1.1.8" intransitive()
In your code, import com.foursquare.rogue.Rogue._ and then start using Rogue. For example, using the Scala console (see Running Queries from the Scala Console):
scala> import com.foursquare.rogue.Rogue._
import com.foursquare.rogue.Rogue._

scala> import code.model._
import code.model._

scala> Country.where(_.id eqs "uk").fetch
res1: List[code.model.Country] = List(class code.model.Country={_id=uk,
  population=Map(Brighton -> 134293, Liverpool -> 469017, Birmingham -> 970892)})

scala> Country.where(_.id eqs "uk").count
res2: Long = 1

scala> Country.where(_.id eqs "uk").modify(_.population at "Brighton" inc 1).updateOne()
Rogue is able to use information in your Lift record to offer an elegant way to query and update records. It's type-safe, meaning, for example, that if you try to use an Int where a String is expected in a query, MongoDB would allow that and fail to find results at runtime, but Rogue enables Scala to reject the query at compile time:
scala> Country.where(_.id eqs 7).fetch
<console>:20: error: type mismatch;
 found   : Int(7)
 required: String
              Country.where(_.id eqs 7).fetch
The DSL constructs a query that we then fetch to send to MongoDB. That last method, fetch, is just one of the ways to run the query. Others include:
- count: Queries MongoDB for the size of the result set.
- countDistinct: Shows the number of distinct values in the results.
- exists: True if there's any record that matches the query.
- get: Returns an Option[T] from the query.
- fetch(limit: Int): Similar to fetch, but returns at most limit results.
- updateOne, updateMulti, upsertOne, and upsertMulti: Modify a single document, or all documents, that match the query.
- findAndDeleteOne and bulkDelete_!!: Delete records.
The query language itself is expressive, and the best place to explore the variety of queries is in the QueryTest specification in the source for Rogue. You'll find a link to this in the README of the project on GitHub.
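For a taste of that variety, here are a couple of further query shapes against the Country record used earlier. These are sketches from memory rather than quotes from QueryTest, so treat the in operator in particular as an assumption to verify there:

import com.foursquare.rogue.Rogue._
import code.model._

// Membership test on the primary key (assumed operator; see QueryTest)
Country.where(_.id in List("uk", "fr")).fetch()

// Run methods from the list above: a capped fetch and an existence check
Country.where(_.id eqs "uk").fetch(1)
Country.where(_.id eqs "uk").exists()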
Note: Rogue is working towards a version 2 release that introduces a number of new concepts. If you want to give it a try, take a look at the instructions and comments on the Rogue mailing list.
For geospacial queries, see Storing Geospatial Values.
The README page for Rogue is a great starting point, and includes a link to QueryTest, giving plenty of example queries to crib.
The motivation for Rogue is described in a Foursquare engineering blog post.
Use Rogue's LatLong class to embed location information in your model. For example, we can store the location of a city like this:
import com.foursquare.rogue.Rogue._
import com.foursquare.rogue.LatLong

class City private () extends MongoRecord[City] with ObjectIdPk[City] {
  override def meta = City
  object name extends StringField(this, 60)
  object loc extends MongoCaseClassField[City, LatLong](this)
}

object City extends City with MongoMetaRecord[City] {
  import net.liftweb.mongodb.BsonDSL._
  ensureIndex(loc.name -> "2d", unique = true)
  override def collectionName = "example.city"
}
We can store values like this:
val place = LatLong(50.819059, -0.136642)

val city = City.createRecord.name("Brighton, UK").loc(place).save(true)
This will produce data in MongoDB that looks like this:
{
  "_id" : ObjectId("50f2f9d43004ad90bbc06b83"),
  "name" : "Brighton, UK",
  "loc" : {
    "lat" : 50.819059,
    "long" : -0.136642
  }
}
MongoDB supports geospatial indexes, and we’re making use of this by doing two things. First, we are storing the location information in one of MongoDB’s permitted formats. The format is an embedded document containing the coordinates. We could also have used an array of two values to represent the point.
Second, we're creating an index of type 2d, which allows us to use MongoDB's geospatial functions such as $near and $within. The unique=true in the ensureIndex highlights that you can control whether locations need to be unique (true, no duplicates) or not (false).
With regard to the unique index, you'll note that we're calling save(true) on the City in this example, rather than the plain save used in most other recipes. We could use save here, and it would work fine, but the difference is that save(true) raises the write concern level from "normal" to "safe."
With the normal write concern, the call to save would return as soon as the request has gone down the wire to the MongoDB server. This gives a certain degree of reliability, in that save would fail if the network had gone away. However, there's no indication that the server has processed the request. For example, if we tried to insert a city at the exact same location as one that was already in the database, the index uniqueness rule would be violated and the record would not be saved. With just save (or save(false)), our Lift application would not receive this error, and the call would fail silently. Raising the concern to "safe" causes save(true) to wait for an acknowledgment from the MongoDB server, which means the application will receive exceptions for some kinds of errors.
As an example, if we tried to insert a duplicate city, our call to save(true) would result in:
com.mongodb.MongoException$DuplicateKey: E11000 duplicate key error index:
  cookbook.example.city.$loc_2d
There are other levels of write concern, available via another variant of save that takes a WriteConcern as an argument.
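For instance, a minimal sketch passing one of the Java driver's predefined concerns explicitly; WriteConcern.SAFE here is assumed to match the behaviour described for save(true), and the other constants trade speed against stronger guarantees:

import com.mongodb.WriteConcern

// Wait for the server to acknowledge the write, as save(true) does
city.save(WriteConcern.SAFE)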
If you ever need to drop an index, the MongoDB command is:
db.example.city.dropIndex("loc_2d")
The reason this recipe uses Rogue's LatLong class is to enable us to query using the Rogue DSL. Suppose we've inserted other cities into our collection:
> db.example.city.find({}, {_id:0})
{ "name" : "London, UK", "loc" : { "lat" : 51.5, "long" : -0.166667 } }
{ "name" : "Brighton, UK", "loc" : { "lat" : 50.819059, "long" : -0.136642 } }
{ "name" : "Paris, France", "loc" : { "lat" : 48.866667, "long" : 2.333333 } }
{ "name" : "Berlin, Germany", "loc" : { "lat" : 52.533333, "long" : 13.416667 } }
{ "name" : "Sydney, Australia", "loc" : { "lat" : -33.867387, "long" : 151.207629 } }
{ "name" : "New York, USA", "loc" : { "lat" : 40.714623, "long" : -74.006605 } }
We can now find those cities within 500 kilometers of London:
import com.foursquare.rogue.{LatLong, Degrees}

val centre = LatLong(51.5, -0.166667)
val radius = Degrees((500 / 6378.137).toDegrees)

val nearby = City.where(_.loc near (centre.lat, centre.long, radius)).fetch()
This would query MongoDB with this clause:
{ "loc" : { "$near" : [ 51.5, -0.166667, 4.491576420597608 ] } }
which will identify London, Brighton, and Paris as near to London.
The form of the query is a centre point and a spherical radius. Records falling inside that radius match the query and are returned closest first. We calculate the radius in radians: 500 km divided by the radius of the Earth, approximately 6,378 km, gives us an angle in radians. We convert this to Degrees as required by Rogue.
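That calculation is worth wrapping in a helper if you query by distance in more than one place. A small sketch, using the same 6,378.137 km approximation of the Earth's radius as above:

import com.foursquare.rogue.Degrees

// Convert a distance in kilometres to the angular radius Rogue expects
def kmToDegrees(km: Double): Degrees = Degrees((km / 6378.137).toDegrees)

val radius = kmToDegrees(500)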
The MongoDB Manual discusses geospatial indexes.
Learn more about write concerns from the MongoDB Manual.
Start the console from your project, call boot(), and then interact with your model.
For example, using the MongoDB records developed as part of Connecting to a MongoDB Database, we can perform a basic query:
$ sbt
...
> console
[info] Compiling 1 Scala source to /cookbook_mongo/target/scala-2.9.1/classes...
[info] Starting scala interpreter...
[info] Welcome to Scala version 2.9.1.final ...
Type in expressions to have them evaluated.
Type :help for more information.

scala> import bootstrap.liftweb._
import bootstrap.liftweb._

scala> new Boot().boot

scala> import code.model._
import code.model._

scala> Country.findAll
res2: List[code.model.Country] = List(class code.model.Country={_id=uk,
  population=Map(Brighton -> 134293, Liverpool -> 469017, Birmingham -> 970892)})

scala> :q
Running everything in Boot may be a little heavy-handed, especially if you are starting up various services and background tasks. All we need to do is define a database connection. For example, using the sample code presented in Connecting to a MongoDB Database, we could initialise a connection with:
scala> import bootstrap.liftweb._
import bootstrap.liftweb._

scala> import net.liftweb.mongodb._
import net.liftweb.mongodb._

scala> MongoUrl.defineDb(DefaultMongoIdentifier, "mongodb://127.0.0.1:27017/cookbook")

scala> Country.findAll
res2: List[code.model.Country] = List(class code.model.Country={_id=uk,
  population=Map(Brighton -> 134293, Liverpool -> 469017, Birmingham -> 970892)})
Connecting to a MongoDB Database explains connecting to MongoDB and Using Rogue describes querying with Rogue.
Using the Specs2 testing framework, surround your specification with a context that creates and connects to a database for each test and destroys it after the test runs.
First, create a Scala trait to set up and destroy a connection to MongoDB. We’ll be mixing this trait into our specifications:
import net.liftweb.http.{Req, S, LiftSession}
import net.liftweb.util.StringHelpers
import net.liftweb.common.Empty
import net.liftweb.mongodb._
import com.mongodb.ServerAddress
import com.mongodb.Mongo
import org.specs2.mutable.Around
import org.specs2.execute.Result

trait MongoTestKit {

  val server = new Mongo(new ServerAddress("127.0.0.1", 27017))

  def dbName = "test_" + this.getClass.getName
    .replace(".", "_")
    .toLowerCase

  def initDb(): Unit = MongoDB.defineDb(DefaultMongoIdentifier, server, dbName)

  def destroyDb(): Unit = {
    MongoDB.use(DefaultMongoIdentifier) { d => d.dropDatabase() }
    MongoDB.close
  }

  trait TestLiftSession {
    def session = new LiftSession("", StringHelpers.randomString(20), Empty)
    def inSession[T](a: => T): T = S.init(Req.nil, session) { a }
  }

  object MongoContext extends Around with TestLiftSession {
    def around[T <% Result](testToRun: => T) = {
      initDb()
      try {
        inSession {
          testToRun
        }
      } finally {
        destroyDb()
      }
    }
  }

}
This trait provides the plumbing for connecting to a MongoDB server running locally, and creates a database based on the name of the class it is mixed into. The important part is the MongoContext, which ensures that around your specification the database is initialised, and that after your specification is run, it is cleaned up.
To use this in a specification, mix in the trait and then add the context:
import org.specs2.mutable._

class MySpec extends Specification with MongoTestKit {

  sequential

  "My Record" should {

    "be able to create records" in MongoContext {
      val r = MyRecord.createRecord
      // ...your useful test here...
      r.valueBox.isDefined must beTrue
    }

  }
}
You can now run the test in SBT by typing test:
> test
[info] Compiling 1 Scala source to target/scala-2.9.1/test-classes...
[info] My Record should
[info] + be able to create records
[info]
[info]
[info] Total for specification MySpec
[info] Finished in 1 second, 199 ms
[info] 1 example, 0 failure, 0 error
[info]
[info] Passed: : Total 1, Failed 0, Errors 0, Passed 0, Skipped 0
[success] Total time: 1 s, completed 03-Jan-2013 22:47:54
Lift normally provides all the scaffolding you need to connect and run against MongoDB. Without a running Lift application, we need to ensure MongoDB is configured when our tests run outside of Lift, and that's what the MongoTestKit trait is providing for us.
The one unusual part of the test setup is including a TestLiftSession. This provides an empty session around your test, which is useful if you are accessing or testing state-related code (e.g., access to S). It's not strictly necessary for running tests against Record, but it has been included here because you may want to do that at some point, for example if you are testing user login via MongoDB records.
There are a few nice tricks in SBT to help you run tests. Running test will run all the tests in your project. If you want to focus on just one test, you can:
> test-only org.example.code.MySpec
This command also supports wildcards, so if we only wanted to run tests that start with the word “Mongo” we could:
> test-only org.example.code.Mongo*
There's also test-quick (in SBT 0.12), which will only run tests that have not been run, have changed, or failed last time, and ~test to watch for changes in tests and run them.
test-only together with modifications to around in MongoTestKit can be a good way to track down any issues you have with a test. By disabling the call to destroyDb(), you can jump into the MongoDB shell and examine the state of the database after a test has run.
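One quick way to do that, sketched here, is a throwaway specification that overrides destroyDb so the data survives the run; the record and class names are placeholders, and you'll want to drop the leftover test database by hand afterwards:

import org.specs2.mutable._
import net.liftweb.mongodb.MongoDB

// Hedged sketch: skip cleanup so the test database can be inspected in the mongo shell
class MyDebugSpec extends Specification with MongoTestKit {

  override def destroyDb(): Unit = MongoDB.close

  sequential

  "My Record" should {
    "leave its data behind for inspection" in MongoContext {
      MyRecord.createRecord.save
      success
    }
  }
}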
Around each test, we’ve simply deleted the database so the next time we try to use it, it’ll be empty. In some situations, you may not be able to do this. For example, if you’re running tests against a database hosted with companies such as MongoLabs or MongoHQ, then deleting the database will mean you won’t be able to connect to it next time you run.
One way to resolve that is to clean up each individual collection, by defining the collections you need to clean up and replacing destroyDb with a method that will remove all entries in those collections:
lazy val collections: List[MongoMetaRecord[_]] = List(MyRecord)

def destroyDb(): Unit = {
  collections.foreach(_ bulkDelete_!! new BasicDBObject)
  MongoDB.close
}
Note that the collection list is lazy to avoid starting up the Record system before we've initialised our database connections.
If your tests are modifying data and have the potential to interact, you’ll want to stop SBT from running your tests in parallel. A symptom of this would be tests that fail apparently randomly, or working tests that stop working when you add a new test, or tests that seem to lock up. Disable by adding the following to build.sbt:
parallelExecution in Test := false
You'll notice that the example specification includes the line sequential. This disables the default behaviour in Specs2 of running all tests concurrently.
IntelliJ IDEA detects and allows you to run Specs2 tests automatically. With Eclipse, you’ll need to include the JUnit runner annotation at the start of your specification:
import org.junit.runner.RunWith
import org.specs2.runner.JUnitRunner

@RunWith(classOf[JUnitRunner])
class MySpec extends Specification with MongoTestKit {
  ...
You can then “Run As…” the class in Eclipse.
The Specs2 site contains examples and a user guide.
If you prefer to use the Scala Test framework, take a look at Tim Nelson’s Mongo Auth Lift module. It includes tests using that framework. Much of what Tim has written there has been adapted to produce this recipe for Specs2.
The Lift MongoDB Record library includes a variation on testing with Specs2, using just Before and After rather than the around example used in this recipe.
Flapdoodle provides a way to automate the download, install, setup, and cleanup of a MongoDB database. This automation is something you can wrap around your unit tests, and a Specs2 integration is included, using the same Before and After approach to testing used by Lift MongoDB Record.
The test interface provided by SBT, such as the test command, also supports the ability to fork tests, set specific configurations for test cases, and ways to select which tests are run.
The Lift wiki describes more about unit testing and Lift sessions.