Moving!

“I sit beside the fire and think
Of all that I have seen
Of meadow flowers and butterflies
In summers that have been

Of yellow leaves and gossamer
In autumns that there were
With morning mist and silver sun
And wind upon my hair

I sit beside the fire and think
Of how the world will be
When winter comes without a spring
That I shall ever see

For still there are so many things
That I have never seen
In every wood in every spring
There is a different green

I sit beside the fire and think
Of people long ago
And people that will see a world
That I shall never know

But all the while I sit and think
Of times there were before
I listen for returning feet
And voices at the door”

— J.R.R. Tolkien

Failtales is shifting gears 🙂 I am moving most of the content to a different site, changing the format, changing the technology, and giving it a new name. I'm not doing this because I dislike my current blogging service – quite the contrary, in fact.

However, I have always been someone who welcomes change, especially in technology. If you’re not changing, you’re not learning. So without further ado – adios!

Y2N1Z2djZjovL3VybmNmY2VubC52Yg==

Metasploit soul-searching: scanning with metasploit

I've been trying to expand my knowledge of Metasploit recently. I've received training that covered the framework quite extensively, and for that I'm grateful; but to really appreciate how much the tool can do, there's nothing quite like practice.

With this in mind, I downloaded the Metasploitable 2 VM over at SourceForge (http://sourceforge.net/projects/metasploitable/files/Metasploitable2/) and began hacking away at it.

When you start working on these things, it's awfully tempting to fall back into 'CTF mode', where your only objective is to get in. That isn't a very effective way to test the tool, though. So I tried to limit myself to Metasploit only and see how far I could go. I also took a breadth-first approach rather than a depth-first one – in other words, exploring as much of the scanning functionality as possible before moving on to exploitation.

My first task was to scan the machine, and that scan is the focus of this article. I used Metasploit's database functionality to catalog what I found. If you've never tried this before, I'd highly recommend you check out this article:
http://www.offensive-security.com/metasploit-unleashed/Using_the_Database

I began by performing a version and script scan of the machine using nmap and importing the results into the Metasploit database. One can perform the scan from within Metasploit as well, so I see the use of nmap as fair play. The version and script scan gets you all sorts of juicy information, which you can see using the 'services' command.
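
For reference, here's roughly what that looks like (the hostname is a placeholder, and these are essentially the same commands as in the tl;dr at the bottom). Since the database plugin also lets you run nmap from inside msfconsole via db_nmap, writing the results straight into the database, I've included that variant too:

nmap -sS -sV -sC metasploitable.host.local -oA metasploitable_scan

Then, in msfconsole:

db_import ~/scans/metasploitable_scan.xml
services

Or, entirely from within msfconsole:

db_nmap -sS -sV -sC metasploitable.host.local
services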

Crawling web pages

Metasploitable 2 has web applications running on it. One thing that constantly worries me during a pentest is whether I’m going to miss any pages, so at some point I’ll try to have at least two spiders index the site (in addition to any general statistics I might get by searching for the site on Google during recon). Here’s a nifty auxiliary I found for this in metasploit:

auxiliary/crawler/msfcrawler

You should note, however, that the module is a bit of a pain with regard to its output: every URI gets printed to the screen, and it blocks Metasploit while it's running. Not ideal. So I'd recommend spawning an additional msfconsole session, running the module there, and having it spool its output to a file. For more information about spooling, check out:

https://community.rapid7.com/community/metasploit/blog/2011/06/25/metasploit-framework-console-output-spooling
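
In that second msfconsole instance, the sequence would look something like this (the log path is just an example; the commands mirror the tl;dr at the bottom of this post):

spool /tmp/crawler.log
use auxiliary/crawler/msfcrawler
set RHOSTS metasploitable.host.local
run
spool off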

Working with MySQL

Metasploitable 2 also exposes a MySQL instance, which is password protected. Here's the name of the MySQL brute-forcing auxiliary:

auxiliary/scanner/mysql/mysql_login

A few things to note here: first, the module is flexible enough to let you specify files for users and passwords, provide specific values for either the user name or the password, and/or use the creds already in your Metasploit database. Very cool. Second, it's fairly slow – don't expect immediate results. Third, it can be as verbose as the crawler module, if not more so. I'd recommend turning off verbose mode.
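
A minimal, non-verbose run might look like the following; the wordlist paths are illustrative (point them at wherever your Metasploit data directory lives), and RHOSTS reuses the placeholder hostname from earlier:

use auxiliary/scanner/mysql/mysql_login
set RHOSTS metasploitable.host.local
set USER_FILE /path/to/metasploit/data/wordlists/unix_users.txt
set PASS_FILE /path/to/metasploit/data/wordlists/unix_passwords.txt
set VERBOSE false
run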

Working with Tomcat

Similar to MySQL's login auxiliary, there is also one for the Tomcat manager application:

auxiliary/scanner/http/tomcat_mgr_login

In the case of metasploitable, this worked really nicely for me. Also, when using the database, any successfully found creds get stored in the database for re-use! You can see them by issuing the ‘creds‘ command. Swanky.

Don’t forget that metasploit does come with a bunch of wordlists, which you’ll find in the data directory. Wordlists are segmented by type of service (such as http users, unix users, directory names…)
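
Tying the last two points together, here's a sketch of the Tomcat module fed with one of the bundled wordlists; the file path is illustrative, and the 8180 port is what Metasploitable 2 exposes Tomcat on (double-check against your services output):

use auxiliary/scanner/http/tomcat_mgr_login
set RHOSTS metasploitable.host.local
set RPORT 8180
set USERPASS_FILE /path/to/metasploit/data/wordlists/tomcat_mgr_default_userpass.txt
set VERBOSE false
run
creds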

Other tools

There is a lot of functionality in Metasploit; the more I look, the more I find. I typically do my searches from msfconsole using something like:

grep keyword2 search keyword1

Multiple-keyword searches and searching by type don't work for me; I'm probably doing something wrong, so feel free to chime in if you've got a solution. However, if you're looking for more Metasploit goodness, it turns out that Rapid7 has indexed all of the out-of-the-box functionality offered by Metasploit:

http://www.rapid7.com/db/search

tl;dr (too long; didn’t read)

nmap -sS -sV -sC metasploitable.host.local -oA metasploitable_scan

In metasploit:

workspace -a metasploitable
db_import ~/scans/metasploitable_scan.xml
services
use auxiliary/crawler/msfcrawler
spool /tmp/crawler.log
setg RHOSTS metasploitable.host.local
exploit
use auxiliary/scanner/mysql/mysql_login
setg USER_FILE xxx
setg PASS_FILE yyy
set VERBOSE false
exploit
use auxiliary/scanner/http/tomcat_mgr_login
exploit

Automating your cloud infrastructure, part one: automating server deployment with pyrax, unittest and mock

Automating server deployments with python - because even sysadmins need their sleep... (Image CC license from openclipart user hector-gomez)

I've been tinkering with cloud infrastructure a lot over the past couple of years. I mostly administer my servers by hand, but I've recently tasked myself with migrating a dev environment of half a dozen servers – I figured it would be a good opportunity to roll up my sleeves and do some writing on the topic of cloud infrastructure automation. In this first post, I'll present a possible approach to automating server deployment (as well as unit-testing your deployment scripts) using Rackspace Cloud's API. I'll eventually get round to other topics such as automating configuration with Puppet and setting up monitoring and intrusion prevention, but for now the focus is on automating server deployments in the cloud.

Your mission, should you choose to accept it…

Imagine that you are a sysadmin tasked with deploying several servers on cloud infrastructure. There was a time when you had to painstakingly configure and deploy each server manually… a boring, highly repetitive, and error-prone task. These annoyances are easily overcome nowadays: advances in virtualization and cloud computing have made automating server deployments a breeze.

Before we begin, though, here are a couple of assumptions: I assume that you are working with Rackspace Cloud and are familiar with its services. I also assume that you're familiar with the concept of ghosting, i.e. creating a pre-configured base template of a typical server in your infrastructure. Configuration of the base template is out of scope for this post; I might cover some basics in the near future when broaching the subject of configuration management with Puppet, though.

I also assume that you are familiar with software development concepts such as test-driven development, code versioning, and software design patterns. I’ve tried to provide links where it makes sense; if something’s not easy to follow, feel free to comment and I’ll answer/amend accordingly.

Introducing the tools

Infrastructure automation is a subject mainly discussed in sysadmin circles; however, the tools that I use in my approach come largely from my programming / testing toolkit. I’m an advocate of Test-Driven Development, and I see no reason why the same cannot be applied to systems administration.

My entire approach assumes that you are comfortable with Python. I've set myself up with PyDev for this task; PyDev is a Python editor based on the excellent open-source Eclipse IDE. The benefits of using PyDev over notepad, notepad++ or gedit are that 1) you get syntax highlighting and code completion, 2) the refactoring plug-ins are sweet, and 3) you can manage and run your unit tests from the IDE. I realize and respect that there are a lot of vi / nano / emacs purists out there – I used to be one. If you're happier using a nice, clean editor like that, cool! It doesn't change my approach.

But I digress. Rackspace Cloud has a ReST API that allows you to perform (almost) all the tasks you can perform from the admin dashboard: you can create servers and isolated networks, list server images, and so on. The full panoply of functionality is documented at http://docs.rackspace.com/. If you're familiar with Python's urllib library, you could implement your own client with a little work; however, I would recommend using pyrax instead. The library is easy to use, well documented, and only a pip install away. I'll be using it in my sample source code.
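
As a quick taste (this is just a sketch with placeholder credentials, separate from the toolkit described below), authenticating and listing your cloud servers with pyrax takes only a few lines:

import pyrax

# Authenticate against the Rackspace identity service.
pyrax.set_setting("identity_type", "rackspace")
pyrax.set_credentials("my_username", "my_api_key")

# List the cloud servers visible to this account.
cs = pyrax.cloudservers
for server in cs.servers.list():
    print server.name, server.status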

As mentioned before, I’m keen on TDD; when developing my deployment script, I begin by writing tests that are bound to fail when they are first run, then implement the code that will make them succeed. This way I can catch a lot of silly errors before I launch the script in my production environment and I can make sure that changes I make as I go along don’t break the scripts. I use the unittest and mock libraries to achieve this purpose. I don’t go as far as to check code coverage, though I may do so eventually for larger scripts.

Setting up the project

I recommend setting up a basic environment so that you can comfortably write scripts for your infrastructure. If you administer several infrastructures, I urge you to have one environment per infrastructure so as to avoid any accidental deployments (or deletions!).

Your entire environment should be contained in a single folder as a package. I'd recommend setting up code versioning with a tool like git to manage code changes and branches – for instance, you could easily maintain deployment scripts for several infrastructures that way.

Here’s what your environment should look like; I’ve called the root directory of the environment my_rackspace_toolkit – I provide explanations for each component below:

my_rackspace_toolkit [dir]
|
+--> rackspace_context.py
|
+--> rackspace_shell.py
|
+--> category [dir]
     |
     +--> deployment_script.py
     |
     +--> tests [dir]
          |
          +--> deployment_script_tests.py

rackspace_context.py

This contains a single class, RackspaceContext. This allows you to supply your scripts with contextual variables for calling pyrax objects. Here’s an example implementation:

import pyrax as pyrax_lib
import keyring as keyring_lib

class RackspaceContext(object):

    #Set up alias to the pyrax library:
    pyrax = pyrax_lib
    keyring = keyring_lib

    def __init__(self):
        # Set up authentication
        self.pyrax.set_setting("identity_type", "rackspace")
        self.keyring.set_password("pyrax", "my_username", "my_api_token")
        self.pyrax.keyring_auth("my_username")

        # Set up aliases
        self.cs     = self.pyrax.cloudservers
        self.cnw    = self.pyrax.cloud_networks

As the name indicates, RackspaceContext is a typical implementation of the Context design pattern. There are several benefits to this:

  1. With a Context class, you can consistently set up authentication throughout all your deployment scripts. If your API token changes, you only have one file to worry about.
  2. If you want to re-deploy your environment for multiple rackspace accounts, you need only change the context and you’re good to go.
  3. If done right, your deployment scripts don’t need to worry about authentication – they just need to consume the context class.
  4. This makes testing your scripts insanely simple. We’ll see why in a moment.

rackspace_shell.py

The rackspace shell is a command-line interface that pre-loads the context and any scripts that you’ve written so that you can execute them easily. Here’s an example:

#!/usr/bin/env python

from rackspace_context import RackspaceContext
context = RackspaceContext()

# Import the CreateDevEnvironment script so it can easily be called from the shell.
from dev.create_dev_environment import CreateDevEnvironment

print """
Pyrax Interactive Shell - preloaded with the rackspace context.

When running your scripts, please make calls using the context object.

For instance:

script = CreateDevEnvironment()
result = script.actuate(context)

print result
"""

# Drop to the shell:
import code
code.interact("Rackspace shell", local=locals())

Note that if you’re writing deployment scripts for use in a CI environment like Jenkins, you may wish to adapt this file to make it either interactive or non-interactive, perhaps by using a flag. I’ve found that it’s a useful thing to have in any case.
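
Here's a minimal sketch of what such a flag could look like using argparse; the option name and behaviour are my own choices, not anything prescribed by pyrax or Jenkins:

#!/usr/bin/env python
import argparse
import code

from rackspace_context import RackspaceContext
from dev.create_dev_environment import CreateDevEnvironment

parser = argparse.ArgumentParser()
parser.add_argument("--non-interactive", action="store_true",
                    help="Run the deployment scripts and exit instead of dropping to a shell.")
args = parser.parse_args()

context = RackspaceContext()

if args.non_interactive:
    # Run the scripts directly, e.g. from a Jenkins job.
    print CreateDevEnvironment().actuate(context)
else:
    # Drop to the usual interactive shell.
    code.interact("Rackspace shell", local=locals())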

category directory

You are likely to have several types of deployment scripts; I recommend that you divide them by category using packages. For instance, why not have a dev package for the development servers? Why not separate creation scripts from deletion scripts? How you separate your scripts, whether by functionality or by server type, is up to you; I've found that some categorization is essential, particularly because you may find yourself executing many of these scripts at a time and you need a way to do so in a logical manner. Make sure that each of your directories has an __init__.py file, making it a package.

deployment_script.py

Each deployment script should be a file containing a class that will be called from your shell. For instance:

class CreateDevEnvironment(object):
    """
    Script object used to perform the necessary initialization tasks.
    To use, instantiate the class and call the actuate() method.
    """
    # Set up list of machines
    MACHINES = ["machine-1",
                "machine-2",
               ]

    def actuate(self, pyrax_context):
        """
        Actually performs the task of creating the machines.
        """
        try:
            # Get the flavors of distribution
            flavors = [flavor
                       for flavor in pyrax_context.cs.flavors.list()
                       if flavor.name == u"512MB Standard Instance"]
            flavor_exists = len(flavors) == 1

            # Get the networks
            networks = [network
                        for network in pyrax_context.cnw.list()
                        if network.label == "MyPrivateNetwork"]
            network_exists = len(networks) == 1
            network_list = []
            for network in networks:
                network_list += network.get_server_networks()

            # Get the images
            images = [image
                      for image in pyrax_context.cs.images.list()
                      if image.name == "my_image"]
            image_exists = len(images) == 1

            if flavor_exists and network_exists and image_exists:
                for machine_name in self.MACHINES:
                    pyrax_context.cs.servers.create(machine_name,
                                                    images[0].id,
                                                    flavors[0].id,
                                                    nics=network_list)
            return "Creation of machines is successful."
        except Exception as e:
            return "An exception has occurred! Details: {0}".format(e)

The class contains a single method, actuate, which carries out your infrastructure deployment tasks – in this case, it is the creation of two machines based on a previously created image, using the standard 512 MB flavor of server.

deployment_script_tests.py

This is the file containing your unit tests. You can write your tests using unittest (a.k.a. PyUnit) or nose; I've written mine with unittest, and I use mock to provide my tests with fake versions of the pyrax objects. The goal is to verify that the script calls the correct functions with the appropriate parameters, not to actually carry out the calls. Once again, here's an example of how this can be done:

import unittest
from rackspace_context import RackspaceContext as RackspaceContextClass
from mock import Mock
from dev.create_dev_environment import CreateDevEnvironment
from collections import namedtuple

Flavor = namedtuple("Flavor", "id name")
Network = namedtuple("Network", "id label")
Network.get_server_networks = Mock(return_value = [{'net-id': u'some-guid'}])
Image = namedtuple("Image", "id name")

class CreateDevEnvironmentTests(unittest.TestCase):

    RackspaceContext = RackspaceContextClass

    def setUp(self):
        self.RackspaceContext.pyrax = Mock()

        self.RackspaceContext.pyrax.cloudservers.flavors.list = Mock(return_value = [
                                                                                 Flavor(id = u'2', name = u'512MB Standard Instance')
                                                                                 ])

        self.RackspaceContext.pyrax.cloud_networks.list = Mock(return_value = [
                                                                           Network(id = u'1', label = u'MyPrivateNetwork')
                                                                           ])

        self.RackspaceContext.pyrax.cloudservers.images.list = Mock(return_value = [
                                                                                Image(id = u'3', name = u'my_image')
                                                                                ])

        self.context = self.RackspaceContext() 

    def tearDown(self):
        pass

    def testActuate(self):
        create_script = CreateDevEnvironment()
        create_script.actuate(self.context)
        # The script should first check that the 512 standard server flavor exists.
        self.assertTrue(self.context.pyrax.cloudservers.flavors.list.called, "cloudservers flavors list method was not called!")

        # The script should then check that the DevNet isolated network exists.
        self.assertTrue(self.context.pyrax.cloud_networks.list.called, "cloudservers networks list method was not called!")

        # The script should also check that the image it is going to use exists
        self.assertTrue(self.context.pyrax.cloudservers.images.list.called)

        # Finally, the script should call the create method for each of the machines in the script:
        for args in self.context.pyrax.cloudservers.servers.create.call_args_list:
            machine_name, image_id, flavor_id = args[0]
            nic_list = args[1]["nics"]
            self.assertTrue (machine_name in CreateDevEnvironment.MACHINES)
            self.assertTrue(image_id == u'3')
            self.assertTrue(flavor_id == u'2')
            self.assertTrue(nic_list == [{'net-id': u'some-guid'}])

if __name__ == "__main__":
    unittest.main()

Notice how I'm setting up mocks for each method the script expects to return data – this allows me to do back-to-back testing on my scripts so that I know exactly how they will be calling the pyrax libraries. While this doesn't prevent mistakes based on a misunderstanding of how pyrax is used, it does prevent things like accidentally swapping an image id with a flavor id!

Conclusions

Using this methodology, you should be able to easily develop and test scripts that you can use to mass-deploy and configure Rackspace cloud servers. Initial setup of your environment using this approach should take no more than half an hour; once it's in place, you should be able to churn out scripts easily and, more importantly, make use of this nifty little test harness so that you avoid costly accidents!

Suggestions and constructive criticism are welcome; I’m particularly interested if you have seen better approaches to automation, or if you know any other nifty tools. I’d also be interested in finding out if anyone out there has real-world experience using pyrax with Jenkins or Bamboo, and/or integrated this type of scripting with WebDriver scripts.

In my next post, I’ll be discussing Puppet. Now that automating server deployments is no longer a secret to you, how do you get your machines to automatically download packages, set up software, properly configure the firewall et cetera? I’ll attempt to address this and more shortly.

Sharepoint auditing – a few thoughts

I have a colleague who's been updating a Sharepoint permissions matrix lately. It's a good practice (I'd go as far as to say a must) to maintain such a matrix, in a format that is understandable to non-technical folk. It's good for IT departments, who need to periodically check that people have access to the right information. It's good for auditors, who want to show that their clients are exercising due diligence in controlling their resources. And it's good for staff, who need to know which of their peers have access to the company's knowledge, information, and tools.

However, while prepping for her cross-checking work, she'd been led to believe that there are no tools for collecting all of a user's permissions on sites and lists. Since I've heard this claim before, I thought I should debunk the myth and write about a tool that the Sharepoint integrator can add to his or her arsenal: Sushi.

It’s a great little utility, which you can obtain and customize to your heart’s content here if you know how to write code: http://sushi.codeplex.com/

Basically, you download the binary (no need to even build the project from source! I understand that this is sometimes daunting to people) and, in a few clicks, you can get a report on what groups your users are a part of and what specific permissions were granted to the user. You can also list which sites or lists in a site collection do not inherit permissions, which helps you identify what you need to specifically audit.

The catch: if you're looking for a tool that does all your work for you, prepare to be disappointed. Sushi does the footwork – it saves you from having to go through every site and list in your instance and hit "list permissions" and "site permissions" – but it's still up to you to produce a matrix that is comprehensible and readable by your non-technical audience.

Previously, I'd been messing with scripts to extract the data right from the source: the SQL database. I started massaging them into a few SSRS reports, but ran out of time and motivation. I still have the script around somewhere. Frankly, though, with a tool like Sushi out there, I'd be inclined to think that one is better off hacking a bit of code to allow admins to select multiple users and export the results as an XML file, a JSON file, or even to an SQL database. Once that's done, the raw information can be easily formatted with a tool like QlikView.


CISPA – do you know what it is?

This one’s a short one, but it’s important. It concerns not just our family, friends and colleagues in the United States, even though they will be the ones most affected. This is about CISPA, the latest in a series of bills to strip people of their right to privacy.

CISPA will affect you, no matter where or who you are. Do you know why it's important, and how it will change your world?

Regardless of which side you're on, you owe it to yourself to know what it's about.

Let’s consider this quote by Edmund Burke: “The only thing necessary for the triumph of evil is for good men to do nothing.”
I love and hate this quote. Love it because it is a powerful, simple sentence that conveys a strong message. Hate it because it’s so damn accurate.

Truncating your SQL 2008 Database with a few lines of SQL…

Here's a scenario you may be familiar with: you've got yourself a nice Sharepoint setup that you've gotten to run rather nicely. Conscientious admin that you are, you've set up a good, solid maintenance plan that checks your database health and backs up your database and transaction log… But all of a sudden, your backup drive fills up. Since everything has been hunky-dory, you only notice this during your next server check, and by then the transaction log has grown to monstrous proportions! You clear up your backup drive and free up space, but you realize to your horror that your transaction log isn't shrinking… Oh no!

If all of this is hitting home, you've probably already realized that the nifty little commands that used to work in SQL Server 2005 don't work on SQL Server 2008. So did I. Here's my new trick for truncating your SQL 2008 database's transaction log; I hope it helps. I would highly recommend you read the whole article thoroughly before proceeding – it contains information you need to know before you do what you're about to do.

Open up SQL Server Management Studio, then open a query window to the database. For simplicity's sake, I'll assume your DB is called WSS_Content; if you've got multiple content databases / site collections (as well you should), the same applies with a different database name / log file name.

First, run this:

alter database WSS_Content set recovery simple with NO_WAIT
go
checkpoint
go
dbcc shrinkfile(WSS_Content, 1)
go
alter database WSS_Content set recovery full with NO_WAIT

And get yourself some coffee. Lots of coffee – the bigger your transaction log, the longer this will take. Run it during a weekend, or when there are as few people in your office as possible; do NOT abort the process, or you'll regret it.

The above snippet switches your database from the full recovery model to the simple recovery model. The full recovery model makes thorough use of the transaction log; the simple model does not. Before SQL Server actually makes any changes to the database, it stores the commands in the transaction log – that way, if your server crashes, it can pick up what it was doing when it crashed. This is what makes your SQL database so nice and robust: it catalogues EVERYTHING it does so that if something goes wrong, it can retrace its steps.

I know what you're thinking, and no: it's not a good idea to keep your database in 'simple' mode, no matter how good your backups are. The rule of thumb is that if you have a production database that stores data of any relevance at all to you, you should be using the full recovery model, period. If your database is a 'holding area' – if you're just using it to perform computations and pass the results off to another database – you can use the simple recovery model, and maybe even run the database on a RAID-0 array so it's nice and fast. The same goes if your database is written to only once a day, for instance if you are retrieving data from another site or the web, caching it locally, and backing it up immediately afterwards. Those are the only two examples I can think of where it makes sense to use the simple recovery model.
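
If you want to double-check which recovery model a database is currently using (a sensible sanity check before and after the switch above), a query along these lines should do it; WSS_Content is just the example database name from earlier:

select name, recovery_model_desc
from sys.databases
where name = 'WSS_Content'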

Now that you’ve executed the above code, the following code should be pretty fast:
backup log WSS_Content to disk = N'{some backup location of your choice}'
go
dbcc shrinkfile('WSS_Content_Log', 1)

This is what actually shrinks the log file: it takes a backup of the transaction log, as SQL 2008 expects, and then shrinks the file. Of course, if you have enough space on your backup drive, you may wish to just execute this code on its own – it all depends on how big your transaction log has grown.

The Importance of prototyping

In my current job, one of my roles is to take people’s needs and turn them into software. I’ve helped develop and evolve the software engineering process in my company over the last ten years, and I find it interesting that of all of the documentation we produce, the most critical artifact tends to be the mockup (a.k.a. the ‘prototype’, or ‘storyboard’, depending on what industry you come from).

On the importance of mockups

Mockups are crucial to building intuitive, relevant software for your target audience, and here’s why: when you’re brought in to design software, you begin by exchanging ideas, needs, scopes and budgets with your client. You spend a good chunk of time doing nothing but talking about objectives, stakeholders, features, and risks – and by the end of your analysis, your client believes that you know everything that you need to know in order to build what he/she needs.

The reality is, of course, that it's not so simple. By the end of your needs analysis, you'll definitely know a lot more about the client's work and needs than when you went in – but there's a difference between understanding the gist of a person's work and being able to do that work. Although you have an appreciation for the complexity of that person's job, your software's intuitiveness and completeness will be limited by your high-level understanding of the process and information the job involves (unless you come from that domain, naturally). Conversely, your client may have conveyed her needs to you, but she has no means of gauging whether 1) you've understood them or 2) you're able to translate them into something intuitive to use.

This is where prototyping comes in: now that you understand the concept, you can start to design the interface that the client will use to interact with the system. If the design is good, it will reflect the underlying business logic of your solution. The client will therefore be able to gauge your understanding of her domain and comment on whether your design is intuitive and practical… or not. This avoids the cost and frustration of having to re-design your software based on misunderstandings.

Convinced yet? If so, allow me to share the name of a tool I've been using very happily: Balsamiq. I'm not affiliated with the company in any way – frankly, I'm just that impressed with what they've built. It's worth checking out at any rate.

Introducing Balsamiq

Balsamiq is not for designing graphical user interfaces, as one would with Dreamweaver, Eclipse or Visual Studio. It is specifically built to design mockups – there's a difference: the output you get from Balsamiq is deliberately rough and sketch-like rather than pixel-perfect.

When people are shown realistic-looking interfaces, they tend to focus on fonts, colors and choice of image; and although these are certainly an important part of making the solution user-friendly and pleasant to use, those are easily changed later. When you show your prototype to clients, you want them giving you feedback on things like navigation, content and layout, because these are what will make the difference between something that will be used on a daily basis and something that will do the virtual equivalent of collecting dust in the back of the office.

It’s all about speed, speed, speed

The purpose of prototyping is to save time. You can create mockups with nothing more than pencil and paper, so there should be a good reason for using prototyping software. I've found that with Balsamiq, I'm able to create prototypes in a fraction of the time it takes me to design the interface by hand (or with GUI designers like Dreamweaver). Not only that, but the interfaces are rich enough to be recognizable by the client – although you don't want your prototypes to look too realistic, it's no good if people spend most of their time trying to understand what it is you've drawn.

Compatibility

Balsamiq runs on most common operating systems. For those of you who use Ubuntu, you may be disappointed at first when you realize that Adobe has dropped its support for AIR on Linux. Do not be discouraged! AIR, and therefore Balsamiq, can in fact be installed on Ubuntu 12.04 using these instructions: http://www.liberiangeek.net/2012/04/install-adobe-flash-reader-air-in-ubuntu-12-04-precise-pangolin/

Do note that Balsamiq also works very nicely as an application in Chrome and can be purchased from the Google Marketplace. The benefit of using the desktop version, however, is in the links you can create between mockups: you can set up your mockups to point to other mockups and therefore make the presentation of your prototype more interactive.

Final words

If there's anything you should take away from this article, it's this: prototype. Your. Software. I've lost count of how many times I've presented a mockup to clients and they've said "I can tell you've understood, but this isn't quite what I had in mind". I consider this a happy problem – because the alternative is showing up after countless hours of development only to find out I'm going to have to scrap a lot of my work and start afresh. Prototyping is not only cost-effective because it mitigates the risk of project failure due to silly misunderstandings; it also avoids a lot of frustration between you and your client.

Happy prototyping,

R.

Multiple Sharepoint List Synchronizations in Outlook via GPO

When setting up access to a few Sharepoint contact lists via GPO for a client, I realized that only one of the lists was being synchronized. The source of the problem is that when you assign GPOs containing the same setting, the GPOs don't append to each other – they overwrite each other.

This is a pain for sure, but after a few unsuccessful attempts, I realized that the problem I was facing was just one of perspective. Here's a short article that will hopefully help you adjust your way of thinking about multiple Sharepoint list synchronizations in Outlook via GPO.

A few examples to consider:

Example 1
GPO A is applied to the entire domain, GPO B is applied to the Sales OU. GPO A adds the Internal Contacts list, GPO B adds the Sales list.

What happens in the case of users in the Sales OU?

People in the domain who aren't in the Sales OU get just the Internal Contacts list.
People in the Sales OU have the settings in GPO A overridden by GPO B, so they only get the Sales list.

Example 2
Let's say there is no Sales OU, but you still have GPOs A and B.
If the link order of A is lower than B's, all users in the domain will get the Internal Contacts list.
If the link order of B is lower than A's, all users in the domain will get the Sales list.

Your GPO Strategy:

My confusion was ultimately down to how I was thinking about the problem: GPOs are feature-centric, not permission-centric. However, you apply them with users, groups and OUs in mind, so it's easy to slip into a mindset where you're thinking of layering lists based on permissions.

Policy settings are overridden at the policy level, period. If you're hoping to add a few rules to your domain computers' firewall settings by setting up a policy with just those rules, assuming that they will get appended to everything you have set up before, you are mistaken: though that would arguably be the most effective and intuitive way to implement GPO, it just doesn't work that way.

Coming back to our examples:

– You can use security filtering on your GPOs to control which policies get applied to whom, which is especially useful in the case of Example 2.

– If you want certain people to see both lists, you have to think about rewriting your GPOs. For instance, in the case of the Sales OU it makes sense that sales people see both lists, so rewrite GPO B to contain both the Internal Contacts and the Sales entries. What's important here is that you set the link order correctly: right-click on the OU under which the GPOs are applied (this can be the root) and move the order of your GPOs around; make use of permissions where necessary.

Troubleshooting:

I highly recommend using the RSOP (Resulting Set of Policy: Planning) feature in AD Users and Computers. By right-clicking on a user and going to All Tasks > Resulting Set Of Policy: Planning, you can see what the user’s policy is going to look like. Furthermore, there is a nifty Precedence tab which shows all of the policies that are being applied and in what order – this was particularly useful to me because I was inadvertently applying my policies both at the domain and OU level and had forgotten to set the link order at the OU level – once I had removed my policies from the OU level, synchronization worked without a hitch.

Thoughts on the Amazon / Apple hack

Just thought I would share this harrowing tale of how Mat Honan basically got his info deleted off all his devices and personal e-mail accounts within hours:

http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/all/

A few thoughts on this:
  • The guy basically got all his info wiped from his Mac and iPhone through the very mechanisms that are meant to keep those devices safe from data theft.
  • The "entry point" here was the victim's Amazon account; the attacker made his way from the Amazon account into the victim's .me account, and from there into his Gmail account. He wiped the Mac and iPhone, changed the .me and Gmail passwords, then hit the guy's Twitter account.
  • The target of this "Apple hack" that cleared out irreplaceable photos and files actually had nothing to do with his photos, files, or even work data. The real target of the attack? The guy's Twitter account. Why? 'Because it looked cool.'
  • The attacker exploited two different "security philosophies" to gain access. In a nutshell, one company was using the last four digits of the victim's credit card to verify his identity, while elsewhere it's a fairly common practice to display those same last four digits in order to identify a card without giving away the whole number.
The moral of the story? Think security, no matter what or where your systems are. I realize how silly that sounds, and how daunting it can be. Ultimately, we all have insecure practices, especially in a day and age where the boundaries between the technologies we use for work and at home are so blurry.
This story will make you think twice about relying on the cloud – but the reality is that it shouldn't take a story like this to get you thinking about it. You may surmise that this could happen to you and stop using .me, Gmail and the like… Don't! You'd be taking away the wrong message. It's not because it's on the cloud that it's insecure. It's because we tend to mistakenly rely on other companies to do the thinking for us.

Getting Office to transparently authenticate with a TMG-secured MOSS 2010

Transparent Login to Sharepoint isn't as simple as one would assume...

Over the past few months, I've been working with a client on ramping up their existing Sharepoint installation. Although there's still a lot of work to do, we're starting to see light at the end of the tunnel: we've set up a new production farm with a Sharepoint server, an SQL Server, and a Forefront TMG reverse proxy. The Sharepoint / SQL components run on a virtual environment for better scaling and (more importantly) snapshotting, and the TMG setup is starting to look very sweet indeed; we'd run across some performance hiccups, but those now seem to be sorted.

If you're a sysadmin, you can probably appreciate the work involved in setting up, testing, and fool-proofing the above. However, you will probably also understand that to end-users, none of this is particularly interesting. In fact, if you place yourself in the mindset of a non-technical client, what you've effectively witnessed is absolutely no change to your (increasingly expensive) system. "In fact," you may muse, "I'm worse off than before." Indeed, setting up a TMG does pose a few challenges to overcome: namely, when you open up documents that are stored on Sharepoint, you now get prompted for a user name and password.

That particular issue was a hard sell to the client – and frankly, why shouldn't it be? We IT folks keep talking about the benefits of storing documents on private clouds, yet recurring frustrations like being constantly prompted for a username and password are exactly what prevent people from adopting cloud technology. So after some digging, I came up with a solution for getting Office to transparently authenticate with a TMG-secured MOSS installation.

I must admit that this post started out in a different format: it was logged as a trouble-ticket in our ITIL system. However, I spent so much time scouring the Net for simple, concise information on the topic that I think it’s worth re-mentioning. I will assume that you’re looking for some basic information on the topic and a few leads to more detailed articles.

What is SSO?

SSO stands for Single Sign-On. It allows users that have logged into a domain to re-use their credentials transparently for all subsites of the domain. This may sound trivial, but in this day and age when more and more enterprise services are accessible via the cloud it is a mission-critical feature. Consider this: if your exchange inbox is at mail.yourdomain.com and your sharepoint is at sharepoint.yourdomain.com, SSO is what allows you to authenticate once and access both resources without having to re-enter your creds.

SSO doesn't just apply to websites, but also across technologies. In my case, I had gotten feedback from a client that she was getting annoyed at constantly being prompted for her password when opening documents from Sharepoint; SSO transparently authenticates your users, saving them a bit of time and typing.

For more information on SSO with TMG, please consult the following link: http://technet.microsoft.com/en-us/library/cc995112

The configuration, server-side:

– You need to be using Forms-Based Authentication; this is set up from your web listener’s properties, and frankly, this provides the most consistent, secure, interop-compliant end-user experience.
– The SSO features need to be enabled (this is also on the listener’s side). Make sure to specify the domains for which you want to use SSO.
– Enable persistent cookies: you do this by editing the properties of your listener, going to the Forms tab, and hitting Advanced. There, you’ll have a section for cookies. You needn’t enter a name for your cookie, but set the “Use Persistent Cookies” dropdown to “only on private computers”.

The configuration, client-side:

– Add the sharepoint portal to either your trusted sites or your intranet zone. This can be done either by using GPO or by running this manually on all computers that should use SSO:

  • Open Internet Explorer
  • Open Internet Options
  • Navigate to the Security tab
  • Click on the zone
  • Click on Sites
  • Add your site

Note that if you go the GPO way, your domain users will no longer be able to control their sites’ zones. Although this may sound reasonable if you’re a sysadmin, please do remember that your end-users may think differently depending on their corporate culture.

– Protected Mode needs to be disabled for the zone in which you’ve put the portal. This is done in Internet Explorer > Internet Options > Security tab > click on the zone and untick the “protected mode” checkbox.

A few additional notes:

– You could use naked IIS instead of TMG; however, this is not advised. Naked IIS exposes your system directly to security threats, reduces your monitoring capability, and prevents you from providing a reverse cache for your pages (which hurts performance and, in the case of public sharepoints, searchability).
– Use of persistent cookies can be dangerous, namely because people can easily steal and re-use them. This is why it is highly recommended to enable persistent cookies for private computers only.

Finally, I’d like to thank Dinko Fabricini for his easy-to-follow post. To be perfectly honest, his post is a much better how-to than this article. As mentioned earlier, this is an adaptation of a trouble-ticket tech note I logged for my company which I thought would be useful to others. If you’re interested in setting up SSO for your MOSS, I would highly recommend you check out his original post.
http://www.itsolutionbraindumps.com/2011/01/multiple-authentication-prompts-when.html

Addendum: here’s a short and straightforward vlog post about the benefits of using TMG over naked IIS… Useful points to consider when having conversations with the sysadmin team! http://www.youtube.com/watch?v=PnKCZctn8TM