Moving!

“I sit beside the fire and think
Of all that I have seen
Of meadow flowers and butterflies
In summers that have been

Of yellow leaves and gossamer
In autumns that there were
With morning mist and silver sun
And wind upon my hair

I sit beside the fire and think
Of how the world will be
When winter comes without a spring
That I shall ever see

For still there are so many things
That I have never seen
In every wood in every spring
There is a different green

I sit beside the fire and think
Of people long ago
And people that will see a world
That I shall never know

But all the while I sit and think
Of times there were before
I listen for returning feet
And voices at the door”

— J.R.R. Tolkien

Failtales is shifting gears🙂 I am moving (most of) the content to a different site, changing the format, changing the technology, and giving it a new name. I’m not doing this because I dislike my current blogging service – quite the contrary, in fact.

However, I have always been someone who welcomes change, especially in technology. If you’re not changing, you’re not learning. So without further ado – adios!

Y2N1Z2djZjovL3VybmNmY2VubC52Yg==

Metasploit soul-searching: scanning with metasploit

I’ve been trying to expand my knowledge of Metasploit recently. I’ve received training that included quite extensive coverage of the framework, for which I’m grateful; but to really grasp just how much the tool can do, there’s nothing quite like practice.

With this in mind, I downloaded the metasploitable VM over at sourceforge (http://sourceforge.net/projects/metasploitable/files/Metasploitable2/) and began hacking away at it.

When you start working on these things, it’s awfully tempting to fall back into ‘CTF mode’, where your only objective is to get in. That’s not a very effective way of testing the tool, though. So I tried to limit myself to metasploit only, and see how far I could go. I also took a breadth-first approach rather than a depth-first one – in other words, exploring as much of the scanning functionality as possible before moving on to exploitation.

My first task was to scan the machine, which is the focus of this article. I used metasploit’s database functionality to catalog what I found. If you’ve never tried this before, I’d highly recommend you check out this article:
http://www.offensive-security.com/metasploit-unleashed/Using_the_Database

I began by performing a version and script scan of the machine using nmap and importing the results into the metasploit database. One can perform the scan within metasploit, so I see the use of nmap as fair play. The version and script scan gets you all sorts of juicy information, which you can see using the ‘services‘ command.
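If your database is connected, you can also run the scan from within msfconsole itself via db_nmap, which imports the results automatically – a quick sketch (the host name is just a placeholder):

db_nmap -sS -sV -sC metasploitable.host.local
services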

Crawling web pages

Metasploitable 2 has web applications running on it. One thing that constantly worries me during a pentest is whether I’m going to miss any pages, so at some point I’ll try to have at least two spiders index the site (in addition to any general statistics I might get by searching for the site on Google during recon). Here’s a nifty auxiliary I found for this in metasploit:

auxiliary/crawler/msfcrawler

You should note, however, that the module is a bit of a pain with regards to its output: every URI gets printed to screen, and it blocks metasploit while it’s running. Not ideal. So I’d recommend spawning an additional msfconsole session, running the command on that instance, and having it spool its output to a file. For more information about spooling, check out:

https://community.rapid7.com/community/metasploit/blog/2011/06/25/metasploit-framework-console-output-spooling
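Putting that advice together, the second console session would look something like this (mirroring the tl;dr at the end of this post):

spool /tmp/crawler.log
use auxiliary/crawler/msfcrawler
setg RHOSTS metasploitable.host.local
exploit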

Working with MySQL

Metasploitable also has an exposed MySQL instance, which is password-protected. Here’s the name of the mysql bruteforcing auxiliary:

auxiliary/scanner/mysql/mysql_login

A few things to note here: first, the tool is flexible enough to allow you to specify files for users and passwords, provide specific values for either the user name or password, and/or indicate whether you want to use the creds in your metasploit database. Very cool. Second, it’s fairly slow. Don’t expect immediate results. Third, it can be as verbose as — if not more verbose than — the crawler module. I’d recommend turning off verbose mode.

Working with Tomcat

Similar to MySQL’s login auxiliary, you also have Tomcat’s auxiliary:

auxiliary/scanner/http/tomcat_mgr_login

In the case of metasploitable, this worked really nicely for me. Also, when using the database, any successfully found creds get stored in the database for re-use! You can see them by issuing the ‘creds‘ command. Swanky.

Don’t forget that metasploit does come with a bunch of wordlists, which you’ll find in the data directory. Wordlists are segmented by type of service (such as http users, unix users, directory names…).
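For example, you might point tomcat_mgr_login at the bundled Tomcat lists – a sketch, assuming a typical install layout (the exact path varies by distribution):

set USER_FILE /usr/share/metasploit-framework/data/wordlists/tomcat_mgr_default_users.txt
set PASS_FILE /usr/share/metasploit-framework/data/wordlists/tomcat_mgr_default_pass.txt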

Other tools

There is a lot of functionality in metasploit; the more I look, the more I find. I typically do my searches from msf console using searches like:

grep keyword2 search keyword1

Multiple-keyword searches and searching by type don’t work for me. I’m probably doing something wrong – feel free to pipe in if you’ve got a solution. However, if you’re looking for metasploit goodness, it turns out that rapid7 has indexed all of the out-of-the-box functionality offered by metasploit:

http://www.rapid7.com/db/search

tl;dr (too long; didn’t read)

nmap -sS -sV -sC metasploitable.host.local -oA metasploitable_scan

In metasploit:

workspace -a metasploitable
db_import ~/scans/metasploitable_scan.xml
services
use auxiliary/crawler/msfcrawler
spool /tmp/crawler.log
setg RHOSTS metasploitable.host.local
exploit
use auxiliary/scanner/mysql/mysql_login
setg USER_FILE xxx
setg PASS_FILE yyy
set VERBOSE false
exploit
use auxiliary/scanner/http/tomcat_mgr_login
exploit

Replacing your user agent with a Twisted proxy

A couple of days ago, a friend sent me a link to Google’s “killer-robots.txt” easter egg. I was curious to see if there was more to it than just the file (a tribute to both the robots.txt file and to the Terminator movies), and started trying to mess around with the user agent I was sending to google.com. I started out using Burp Suite, which lets you intercept and alter your requests, but it was a drag having to repeatedly intercept and alter the User-Agent string; consequently, I ended up writing the python code below.

It was the first time I’d used Twisted for anything – I have to say, I like its simplicity.

As the header comment says, I did end up using a Firefox plugin to change the user agent; but I thought the code would be useful to have around for those of you who might wish to intercept and alter a whole bunch of packets. I use a regex to identify the user-agent string and replace it with whatever I want – this method grants you quite a bit of flexibility in finding and replacing elements of your request.

Enjoy!


# Written by redparanoid (@Inf0Junki3), Friday 11.07.2014

# ReplacingProxy: A proxy that takes incoming requests, substitutes the user agent with another
# value, and passes the request on. I wrote this script because I was curious about the
# killer-robots.txt easter egg on google, and wanted to see if fiddling with the user-agent would
# change the content of the pages. Sadly, it doesn't.
# Note that I ended up using Firefox's User Agent Switcher to force the agent, because all google
# traffic is over SSL and I didn't want to mess around with HTTPS. I shall someday fix this, when
# I am bored again. In the meantime, this is still a neat proof of concept🙂

# References:
#   https://wiki.python.org/moin/Twisted-Examples
#   http://stackoverflow.com/questions/9063583/python-twisted-proxy-how-to-intercept-packets

from   twisted.web         import proxy, http
from   twisted.internet    import reactor
from   twisted.python      import log
import sys
import re
from   termcolor import colored

#log to the standard output.
log.startLogging(sys.stdout)

class ReplacingProxy(proxy.Proxy):
  """
  The class used to intercept the data received and print it to the std output.
  """
  def dataReceived(self, data):
    match = re.compile(r"User-Agent: .*$", re.MULTILINE)
    new_user_agent = "User-Agent: T-1000"

    if "User-Agent: " in data:
      old_user_agent = match.search(data).group(0)
      print colored(
          "Substituting %s with %s" % (old_user_agent, new_user_agent),
          "blue")
      data = match.sub(new_user_agent, data)
      print colored(
          "OUTPUT: %s" % data,
          "green")

    # Always pass the (possibly modified) data on to the parent class, even
    # when no User-Agent header was found.
    return proxy.Proxy.dataReceived(self, data)
  #end def dataReceived
#end class

class ProxyFactory(http.HTTPFactory):
  """
  The factory class for the proxy.
  """
  protocol = ReplacingProxy
#end class

reactor.listenTCP(8080, ProxyFactory())
reactor.run()

WebSockets

WebSockets are a mechanism that allows a client (typically a web page) to talk to a server without the overhead and complications that web services may pose. The client first establishes a connection over http and then requests to switch over to websockets; the process is described in RFC 6455. Using this technology simplifies the development of elaborate web-based clients and reduces web traffic, which is pretty sweet for developers and admins alike.
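For reference, the opening handshake is just an http request with an upgrade header – here is the example from RFC 6455 (abridged):

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13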

Unfortunately, not properly securing websockets is pretty sweet for attackers, and can lead to information leakage or, in extreme cases, code execution on the server. If you’re used to setting up authentication using a framework and letting things run on their own, you will have to rethink your strategy; and if you plan on doing something like piping your users’ input to another library or executable, be particularly careful about sanitizing it! While the way to a man’s heart is through his stomach, the way to a server’s heart is through user inputs.
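To illustrate that last point, here’s a minimal python sketch of a message handler – the scenario is entirely hypothetical, but the two habits it shows are the point: whitelist what the input may contain, and pass arguments as a list so no shell ever interprets them.

import subprocess

def handle_message(hostname):
    # Hypothetical feature: the client asks the server to ping a host.
    # Whitelist acceptable characters instead of trying to escape bad ones.
    if not hostname.replace(".", "").replace("-", "").isalnum():
        raise ValueError("rejected suspicious input: %r" % hostname)
    # An argument list (no shell=True) keeps the input from being interpreted.
    return subprocess.check_output(["ping", "-c", "1", hostname])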

VNC passwords

We like to think of VNC passwords as encrypted; but when you consider that they’re encrypted using DES (a weak encryption algorithm) with a key that is hardcoded… well… that pretty much makes VNC passwords encoded rather than encrypted. There are a few VNC password revealers out there, such as vncpwd (for Linux) and VNCPassView (for Windows). A prerequisite to using these is having access to the VNC passwd file and/or registry. Other tools exist to snarf the VNC password out of network captures.
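The trick is simple enough to sketch in a few lines of python with the pyDes library – this assumes the well-known fixed VNC key, shown here with the bits of each byte reversed, the form standard DES implementations expect:

from pyDes import des, ECB

# The hardcoded VNC key (bit-reversed per byte for standard DES libraries).
FIXED_KEY = "\xe8\x4a\xd6\x60\xc4\x72\x1a\xe0"

def decrypt_vnc_password(encrypted):
    # VNC stores at most 8 bytes of DES-encrypted password material.
    return des(FIXED_KEY, ECB).decrypt(encrypted).rstrip("\x00")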

Automating your cloud infrastructure, part one: automating server deployment with pyrax, unittest and mock

Automating server deployments with python – because even sysadmins need their sleep… (Image CC license from openclipart user hector-gomez)

I’ve been tinkering with cloud infrastructure a lot in the past couple of years. I mostly administer my servers by hand, but recently I’ve tasked myself with migrating a dev environment of a half dozen servers – I figured it would be a good opportunity to roll up my sleeves and do some writing on the topic of cloud infrastructure automation. In this first post on the topic, I shall provide a possible approach to automating server deployment (as well as unit-testing your scripts) using Rackspace Cloud’s API; I’ll eventually get round to other topics such as automating your deployments and configuration with Puppet, and setting up monitoring and intrusion prevention — but for now, my focus will be on automating server deployments on the cloud.

Your mission, should you choose to accept it…

Imagine that you are a sysadmin tasked with deploying several servers on cloud infrastructure. There was a time when you had to painstakingly configure and deploy each server manually… a boring, highly repetitive, and error-prone task. These annoyances are rapidly overcome nowadays: advances in virtualization and cloud computing have made automating server deployments a breeze.

Before we begin, though, here are a couple of assumptions: I assume that you are working with Rackspace Cloud and are familiar with its services. I also assume that you’re familiar with the concept of ghosting, i.e. creating a pre-configured, base template of a typical server that would be deployed in your infrastructure. Configuration of the base template is not in the scope of this post; I might cover some basics in the near-future when broaching the subject of configuration management with Puppet, though.

I also assume that you are familiar with software development concepts such as test-driven development, code versioning, and software design patterns. I’ve tried to provide links where it makes sense; if something’s not easy to follow, feel free to comment and I’ll answer/amend accordingly.

Introducing the tools

Infrastructure automation is a subject mainly discussed in sysadmin circles; however, the tools that I use in my approach come largely from my programming / testing toolkit. I’m an advocate of Test-Driven Development, and I see no reason why the same cannot be applied to systems administration.

My entire approach is based on the assumption that you are comfortable with Python. I’ve set myself up with PyDev for this task; PyDev is a python editor based on the excellent open-source Eclipse IDE. The benefits of using PyDev over notepad, notepad++ or gedit are that 1) you get syntax highlighting and code completion, 2) the refactoring plug-ins are sweet, and 3) you can manage and run your unit tests from the IDE. I realize and respect that there are a lot of vi / nano / emacs purists out there – I used to be one. If you’re happier using a nice, clean editor like that, cool! Doesn’t change my approach.

But I digress. Rackspace Cloud has a REST API that allows you to perform (almost) all the tasks you can perform from the admin dashboard. You can do things like create servers and isolated networks, or list server images… The panoply of functionality is documented at http://docs.rackspace.com/. If you’re familiar with python’s urllib library, you could implement your own client with a little work; however, I would recommend using pyrax instead. The library is easy to use, well documented, and only a pip/pypi install away. I’ll be using this library in my sample source code.

As mentioned before, I’m keen on TDD; when developing my deployment script, I begin by writing tests that are bound to fail when they are first run, then implement the code that will make them succeed. This way I can catch a lot of silly errors before I launch the script in my production environment and I can make sure that changes I make as I go along don’t break the scripts. I use the unittest and mock libraries to achieve this purpose. I don’t go as far as to check code coverage, though I may do so eventually for larger scripts.

Setting up the project

I recommend setting up a basic environment so that you can comfortably write scripts for your infrastructure. If you administer several infrastructures, I urge you to have one environment per infrastructure so as to avoid any accidental deployments (or deletions!).

Your entire environment should be contained in a single folder as a package. I’d recommend setting up code versioning with a tool like git to manage code changes but also branches – for instance, you could easily maintain deployment scripts for several infrastructures that way.
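A quick sketch of that setup (the branch name is purely illustrative):

git init my_rackspace_toolkit
cd my_rackspace_toolkit
git checkout -b client-a-infrastructure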

Here’s what your environment should look like; I’ve called the root directory of the environment my_rackspace_toolkit – I provide explanations for each component below:

my_rackspace_toolkit [dir]
|
+--> rackspace_context.py
|
+--> rackspace_shell.py
|
+--> category [dir]
     |
     +--> deployment_script.py
     |
     +--> tests [dir]
          |
          +--> deployment_script_tests.py

rackspace_context.py

This contains a single class, RackspaceContext. This allows you to supply your scripts with contextual variables for calling pyrax objects. Here’s an example implementation:

import pyrax as pyrax_lib
import keyring as keyring_lib

class RackspaceContext(object):

    #Set up alias to the pyrax library:
    pyrax = pyrax_lib
    keyring = keyring_lib

    def __init__(self):
        # Set up authentication
        self.pyrax.set_setting("identity_type", "rackspace")
        self.keyring.set_password("pyrax", "my_username", "my_api_token")
        self.pyrax.keyring_auth("my_username")

        # Set up aliases
        self.cs     = self.pyrax.cloudservers
        self.cnw    = self.pyrax.cloud_networks

As the name indicates, RackspaceContext is a typical implementation of the Context design pattern. There are several benefits to this:

  1. With a Context class, you can consistently set up authentication throughout all your deployment scripts. If your API token changes, you only have one file to worry about.
  2. If you want to re-deploy your environment for multiple rackspace accounts, you need only change the context and you’re good to go.
  3. If done right, your deployment scripts don’t need to worry about authentication – they just need to consume the context class.
  4. This makes testing your scripts insanely simple. We’ll see why in a moment.

rackspace_shell.py

The rackspace shell is a command-line interface that pre-loads the context and any scripts that you’ve written so that you can execute them easily. Here’s an example:

#!/usr/bin/env python

from rackspace_context import RackspaceContext
context = RackspaceContext()

# Import the CreateDevEnvironment script so it can easily be called from the shell.
from dev.create_dev_environment import CreateDevEnvironment

print """
Pyrax Interactive Shell - preloaded with the rackspace context.

When running your scripts, please make calls using the context object.

For instance:

script = CreateDevEnvironment()
result = script.actuate(context)

print result
"""

# Drop to the shell:
import code
code.interact("Rackspace shell", local=locals())

Note that if you’re writing deployment scripts for use in a CI environment like Jenkins, you may wish to adapt this file to make it either interactive or non-interactive, perhaps by using a flag. I’ve found that it’s a useful thing to have in any case.
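Here’s one possible sketch of that flag using argparse – the --batch switch and its behaviour are just an assumption for illustration:

#!/usr/bin/env python
import argparse
import code

from rackspace_context import RackspaceContext
from dev.create_dev_environment import CreateDevEnvironment

parser = argparse.ArgumentParser()
parser.add_argument("--batch", action="store_true",
                    help="run the scripts and exit instead of dropping to a shell")
args = parser.parse_args()

context = RackspaceContext()

if args.batch:
    # Non-interactive mode, e.g. for Jenkins: actuate the scripts directly.
    print CreateDevEnvironment().actuate(context)
else:
    code.interact("Rackspace shell", local=locals())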

category directory

You are likely to have several types of deployment scripts; I recommend that you divide them by category using packages. For instance, why not have a dev package for the development servers? Why not separate creation scripts from deletion scripts? How you separate your scripts, whether by functionality or by server type, is up to you; I’ve found that some categorization is essential, particularly because you may find yourself executing many of these scripts at a time and you need a way to do this in a logical manner. Make sure that each of your directories has an __init__.py file, making it a package.

deployment_script.py

Each deployment script should be a file containing a class that will be called from your shell. For instance:

class CreateDevEnvironment(object):
    """
    Script object used to perform the necessary initialization tasks.
    To use, instantiate the class and call the actuate() method.
    """
    # Set up list of machines
    MACHINES = ["machine-1",
                "machine-2",
               ]

    def actuate(self, pyrax_context):
        """
        Actually performs the task of creating the machines.
        """
        try:
            # Get the flavors of distribution
            flavors = [flavor
                       for flavor in pyrax_context.cs.flavors.list()
                       if flavor.name == u"512MB Standard Instance"]
            flavor_exists = len(flavors) == 1

            # Get the networks
            networks = [network
                        for network in pyrax_context.cnw.list()
                        if network.label == "MyPrivateNetwork"]
            network_exists = len(networks) == 1
            network_list = []
            for network in networks:
                network_list += network.get_server_networks()

            # Get the images
            images = [image
                      for image in pyrax_context.cs.images.list()
                      if image.name == "my_image"]
            image_exists = len(images) == 1

            if flavor_exists and network_exists and image_exists:
                for machine_name in self.MACHINES:
                    pyrax_context.cs.servers.create(machine_name,
                                                    images[0].id,
                                                    flavors[0].id,
                                                    nics = network_list)
            return "Creation of machines is successful."
        except Exception as e:
            return "An exception has occurred! Details: %s" % e.message

The class contains a single method, actuate, which carries out your infrastructure deployment tasks – in this case, creating two machines based on a previously created image, using the standard 512 MB server flavor.

deployment_script_tests.py

This is the file containing your unit tests. You can write your tests using unittest, pyunit or nose; I’ve written mine with unittest, and I use mocks to provide my tests with fake versions of pyrax objects. The goal is to verify that the script calls the correct function with the appropriate parameters, not to actually carry out the call. Once again, here’s an example of how this can be done:

import unittest
from rackspace_context import RackspaceContext as RackspaceContextClass
from mock import Mock
from dev.create_dev_environment import CreateDevEnvironment
from collections import namedtuple

Flavor = namedtuple("Flavor", "id name")
Network = namedtuple("Network", "id label")
Network.get_server_networks = Mock(return_value = [{'net-id': u'some-guid'}])
Image = namedtuple("Image", "id name")

class CreateDevEnvironmentTests(unittest.TestCase):

    RackspaceContext = RackspaceContextClass

    def setUp(self):
        self.RackspaceContext.pyrax = Mock()

        self.RackspaceContext.pyrax.cloudservers.flavors.list = Mock(return_value = [
            Flavor(id = u'2', name = u'512MB Standard Instance')
        ])

        self.RackspaceContext.pyrax.cloud_networks.list = Mock(return_value = [
            Network(id = u'1', label = u'MyPrivateNetwork')
        ])

        self.RackspaceContext.pyrax.cloudservers.images.list = Mock(return_value = [
            Image(id = u'3', name = u'my_image')
        ])

        self.context = self.RackspaceContext() 

    def tearDown(self):
        pass

    def testActuate(self):
        create_script = CreateDevEnvironment()
        create_script.actuate(self.context)
        # The script should first check that the 512 standard server flavor exists.
        self.assertTrue(self.context.pyrax.cloudservers.flavors.list.called, "cloudservers flavors list method was not called!")

        # The script should then check that the DevNet isolated network exists.
        self.assertTrue(self.context.pyrax.cloud_networks.list.called, "cloudservers networks list method was not called!")

        # The script should also check that the image it is going to use exists
        self.assertTrue(self.context.pyrax.cloudservers.images.list.called)

        # Finally, the script should call the create method for each of the machines in the script:
        for args in self.context.pyrax.cloudservers.servers.create.call_args_list:
            machine_name, image_id, flavor_id = args[0]
            nic_list = args[1]["nics"]
            self.assertTrue (machine_name in CreateDevEnvironment.MACHINES)
            self.assertTrue(image_id == u'3')
            self.assertTrue(flavor_id == u'2')
            self.assertTrue(nic_list == [{'net-id': u'some-guid'}])

if __name__ == "__main__":
    unittest.main()

Notice how I’m setting up mocks, with canned return values, for each method that the script calls – this allows me to do back-to-back testing on my scripts so that I know exactly how they will be calling the pyrax libraries. While this doesn’t prevent you from making mistakes based on a misunderstanding of how pyrax is used, it does prevent you from doing things like accidentally swapping an image id with a flavor id!

Conclusions

Using this methodology, you should be able to easily develop and test scripts that you can use to mass-deploy and configure rackspace cloud servers. Initial setup of your environment using this approach should take no more than half an hour; once your environment is set up, you should be able to churn out scripts easily and, more importantly, make use of this nifty little test harness so that you avoid costly accidents!

Suggestions and constructive criticism are welcome; I’m particularly interested if you have seen better approaches to automation, or if you know any other nifty tools. I’d also be interested in finding out if anyone out there has real-world experience using pyrax with Jenkins or Bamboo, and/or integrated this type of scripting with WebDriver scripts.

In my next post, I’ll be discussing Puppet. Now that automating server deployments is no longer a secret to you, how do you get your machines to automatically download packages, set up software, properly configure the firewall et cetera? I’ll attempt to address this and more shortly.

Automatically retrieve list of offline shares on all PCs of a domain

I wrote this short VBS script today to help out a client; basically, you can run this on an Active Directory domain as a login script to see if your users’ offline shares are correctly configured. In this case, each user is supposed to have a ‘U:’ drive that syncs with a file server whenever they’re on campus, and is available whenever they’re on the road. Sometimes, though, the configuration isn’t set for one reason or another… Hence the script.

Enjoy🙂

' u_sync_check: VBScript that grabs the information on offline files and saves it for later auditing.
' Written by Rick El-Darwish, 14.05.2013. Inspired by these sites: http://bit.ly/13YTyur and http://bit.ly/17q8Dcx

' Grab the name of the computer executing this script:
Set wshShell = WScript.CreateObject("WScript.Shell")
strComputerName = wshShell.ExpandEnvironmentStrings("%COMPUTERNAME%")

' Compose the path of the file under which this information will be stored. TODO: replace the UNC below with a valid path!
outFile = "\\myserver\myshare\" & strComputerName & ".txt"

' Create the actual file.
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.CreateTextFile(outFile, True)

' Create the WMI object and grab the list of all synched directories.
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\CIMV2")

Set colItems = objWMIService.ExecQuery("SELECT * FROM Win32_OfflineFilesItem WHERE ItemType=2",,48)

' For each synched directory, write an entry in the file
For Each objItem In colItems
    objFile.Write "ItemPath : " & objItem.ItemPath
Next

' Close the file
objFile.Close

Sharepoint auditing – a few thoughts

I have a colleague who’s been updating a sharepoint permissions matrix lately. It’s good practice (I’d go as far as to say it’s a must) to maintain such a matrix, in a format that is understandable to non-technical folk. It’s good for IT departments, who need to periodically check that people have access to the right information. It’s good for auditors, who want to show that their clients are exercising due diligence in controlling their resources. And it’s good for staff, who need to know which of their peers has access to the company’s knowledge, information, and tools.

However, while prepping for her cross-checking work, she’d been led to believe that there are no tools for collecting all of a user’s permissions on sites and lists. Since I’ve heard this claim before, I thought I should debunk the myth and write about a tool that the Sharepoint integrator can add to his or her arsenal: Sushi.

It’s a great little utility, which you can obtain and customize to your heart’s content here if you know how to write code: http://sushi.codeplex.com/

Basically, you download the binary (no need to even build the project from source! I understand that this is sometimes daunting to people) and, in a few clicks, you can get a report on what groups your users are part of and what specific permissions have been granted to them. You can also list which sites or lists in a site collection do not inherit permissions, which helps you identify what you need to specifically audit.

The catch: if you’re looking for a tool that does all your work for you, prepare to be disappointed. Sushi does the footwork – it saves you from repeatedly going through every site and list in your instance and hitting “list permissions” and “site permissions” – but it’s up to you to produce a matrix that is comprehensible and readable by your non-technical audience.

Previously, I’d been messing with scripts to extract the data right from the source: the SQL database. I started massaging them into a few SSRS reports, but ran out of time and motivation. I still have the script around, somewhere. Frankly though, with a tool like Sushi out there, I’d be inclined to think that one is better off hacking a bit of code to allow admins to select multiple users and export the results as an XML file, a JSON file, or even to an SQL database. Once that’s done, the raw information can be easily formatted with a tool like Qlikview.


CISPA – do you know what it is?

This one’s a short one, but it’s important. It concerns not just our family, friends and colleagues in the United States, even though they will be the ones most affected. This is about CISPA, the latest in a series of bills to strip people of their right to privacy.

CISPA will affect you. No matter where & who you are. Do you know why it’s important, and how it will change your world?

Regardless of which side you’re on, you owe it to yourself to know what it’s about.

Let’s consider this quote by Edmund Burke: “The only thing necessary for the triumph of evil is for good men to do nothing.”
I love and hate this quote. Love it because it is a powerful, simple sentence that conveys a strong message. Hate it because it’s so damn accurate.

Gritter and Sharepoint: integrate nifty notifications in your intranet!

I’m working with a client on building up a project management tool in Sharepoint; one thing that’s really piqued their interest is the concept of getting short flashes of information when their staff logs into the landing page. I think it’s a lovely idea, and I intend to give them this functionality – but how, you might ask? There doesn’t seem to be any Sharepoint feature for doing that!

We tend to forget that Sharepoint is, above all, a web-based solution. This means that, with a little ingenuity (and sometimes a lot of sweat and blood), you can integrate some of the latest, coolest web features into your existing Sharepoint. Fortunately, notifications are not too complicated. In this short article, we’re going to walk through creating very cool notifications in Sharepoint using Gritter, a jQuery-based notification plugin.

Step One: Create a Sandbox

This may be as simple as creating a new page in your Site Pages repository. I seriously recommend implementing a proof-of-concept rather than working on your production page… If you’re not familiar with these libraries, the last place you want to test things out is on your production work, as easy as these implementations may seem.

Apart from your page, your sandbox will need a few extra files. These you can either place in the Site Assets repository of your PoC portal, or in the Site Assets repo of your root. The latter has the benefit of being accessible to your entire audience (or at least I assume so, it will depend on your permissions). The files that you need are the latest minified version of jQuery, the latest version of Gritter, and the latest version of SPServices (double-check these pages for compatibility issues, of course – if Gritter or SPServices tell you that they won’t work with certain versions of jQuery, don’t use those versions…)

When downloading Gritter, you’ll notice that it is a zip file that has several folders and files. I recommend that you keep those in one single place in your Site Assets. I find it’s easiest to use Sharepoint Designer to do that.

Step Two: Add Your jQuery References

Now that you have a sandbox, you can start working with it. In case you’re wondering, this section is assuming that you’re working with Sharepoint Designer to do your work.

Open your sandbox page in SPD, editing it in Advanced Mode. Locate the PlaceHolderMain placeholder and add the references to your script files:

<!-- Inserted by Rick – references to jQuery scripts -->

<!-- These are the references to jQuery and SPServices. -->
<script type="text/javascript" src="../SiteAssets/jquery-1.8.3.min.js"></script>
<script type="text/javascript" src="../SiteAssets/jquery.SPServices-0.7.2.min.js"></script>

<!-- These are the references to Gritter: -->
<link rel="stylesheet" type="text/css" href="../SiteAssets/gritter/css/jquery.gritter.css" />
<script type="text/javascript" src="../SiteAssets/gritter/js/jquery.gritter.min.js"></script>

<!-- End Rick insertions. -->

You can test that the libraries loaded correctly by firing up Chrome, navigating to your PoC page, and opening your Console (F12). In the Console, type the following:

$
$(document).SPServices
$.gritter

If any of these return ‘undefined’, review your references and make sure the files are uploaded to the correct location.

Step Three: Setting up your notifications!

OK, now we know that all the necessary libraries are loaded. Time to develop notifications.

Always develop incrementally, testing your code one chunk at a time. To that effect, here’s code that you should insert after the above script blocks:

<!-- Here's a test notification -->
<script type="text/javascript">
$(document).ready( function() {
  $.gritter.add({
    title: 'This is a test notification',
    text: 'This will fade after some time'
  });
});
</script>

If that works, you know that Gritter is functional for static content! Now it’s time to pull the real notifications from a list — this is where SPServices comes in. Before we proceed, we need something to pull information from: create a custom list for your PoC with just the default Title column, called “Latest Activities” for instance. Then you will call the GetListItems operation using SPServices.

The following code replaces your test notification:

<!-- Code for notifications -->
<script type="text/javascript">
//This function throws up the notification:
function notify(curTitle, curContent) {
  $.gritter.add({
    title: curTitle,
    text: curContent
  });
}

//This retrieves the latest item of your Latest Activities.
function getLastActivity() {
  $(document).SPServices({
    operation: "GetListItems",
    async: false,
    listName: "Latest Activities",
    CAMLRowLimit: 1,
    CAMLQuery: "<Query><OrderBy><FieldRef Name='Created' Ascending='False' /></OrderBy></Query>",
    CAMLViewFields: "<ViewFields><FieldRef Name='Title' /></ViewFields>",
    completefunc: function(xData, Status) {
      $(xData.responseXML).SPFilterNode("z:row").each(function() {
        notify('For your information…', $(this).attr("ows_Title"));
      });
    }
  });
}

//On document load, throw up the notification:
$(document).ready( function() {
  getLastActivity();
});
</script>

Et voilà — Gritter and Sharepoint notifications in a nutshell! Your page loads and, once loaded, calls the getLastActivity function. getLastActivity pulls the latest item from the Latest Activities list (we use the CAMLQuery parameter to order by creation date, and the CAMLRowLimit parameter to return only one item), and uses a callback to invoke the notify() function. The notify function is responsible for rendering the Gritter notification.

Happy notifying!

Rick.