Mandatory BYOD

BYOD? PEBKAC? Up to you to decide

Image CC license from openclipart.org user Improulx

I was reading a ZDNet article this morning which observed that more and more executives are considering making BYOD mandatory. I know that some of my clients are thinking about it, and if you read the article it makes a certain kind of sense: it's less expensive for companies when users come in with their own smartphones and computers, users tend to take better care of their own equipment than company-provided equipment, and as a result IT departments may field fewer support requests.

I’d like to point out two things, however. First, this statement:

“Companies and agencies are recognizing that individual employees are doing a better job of handling and managing their devices than their harried and overworked IT departments.”

I'm sorry… what? I've done IT support for home users, and one thing I can tell you for sure is that most people haven't the slightest clue about handling and managing their devices safely. Most home computers I've worked with have had at least one of the following problems:

  • Antivirus and firewall software that is out of date, usually because the user bought a subscription license without realizing it had to be renewed
  • Malware exposure left unchecked. If the user is a fan of illegally downloading software, music, or video of any variety, the exposure is of course much greater; if the household includes children aged 10 or older, exposure is almost a certainty
  • Sluggish or non-functional operation. This boils down to two things: either the machine is completely overloaded with software (crapware, trial versions, and other programs) or it is well past its due date, which means that if the hardware dies, replacing the equipment will be nigh impossible and the chances of recovering the data are slim.

Second, I'd like you to consider the findings of the Verizon data breach report. It finds that 10% of all data breaches they've dealt with are physical, and 5% of all data breaches are due to misuse (read: disgruntled employees, abuse of privileges, etc.)*. Can you think of a better environment for data exfiltration than a BYOD environment?

*: They do go on to indicate that less than 1% of the compromised data comes from misuse. What does this mean? Large companies like LinkedIn, Facebook, or Sony PSN hold enormous amounts of information; when their data is breached, it's typically by the millions of records. That skews the figures because, in comparison, the secret formulation of your latest cancer drug is a fairly small amount of data, but its value is easily comparable.

BYOD is coming, make no mistake about that. However, my take is that as IT policy-makers it is up to us to set the pace and the guidelines for such endeavors; it's not good enough to throw up our arms and say that it's happening anyway. We need to find efficient compromises and, while policy and security are catching up with technical innovation, make sure no one gets hurt.

Happy policy-making,

R.

A nifty little script for splicing pages into your PDFs

Image CC license from openclipart.org user warszawianka

Today, I was in dire need of inserting a page into a PDF without having to regenerate the entire document. I know there are products out there that do this, even cloud services such as foxyutils, but the PDFs in question are somewhat sensitive and I didn't want to go through the whole dance of purchasing commercial software. So I built my own basic script in bash, using pdftk to manipulate the files.

Very, very simple use. Let’s assume the following:

  • The name of the script below is splicer.sh and it is marked as executable
  • The original file is original.pdf
  • The additional page is addendum.pdf
  • You’re placing the contents of addendum after the first page
  • The final file is final.pdf

Then this is how you would call the script from your Linux command line:
./splicer.sh original.pdf addendum.pdf 1 final.pdf

Easy, right? So without further ado, here is the script for splicing pages into your PDFs:

#!/bin/bash
# splicer.sh: splice one PDF into another after a given page, using pdftk.
# Usage: ./splicer.sh original.pdf addendum.pdf <page> result.pdf
original="$1"
addendum="$2"
splice_at="$3"
result="$4"
part_two=$((splice_at + 1))

# Cut the original in two around the splice point...
pdftk "$original" cat 1-"$splice_at" output "tmp_$result"
pdftk "$original" cat "$part_two"-end output "tmp2_$result"
# ...then reassemble it with the addendum in the middle.
pdftk "tmp_$result" "$addendum" "tmp2_$result" cat output "$result"

# Clean up the intermediate files.
rm "tmp_$result" "tmp2_$result"
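The pdftk calls do the heavy lifting; the splice itself is just list arithmetic over page ranges. Here is a minimal Python sketch of that same logic, with a plain list standing in for the document's pages (`splice` is a hypothetical helper of my own, not part of pdftk):

```python
def splice(pages, addendum, splice_at):
    # Keep pages 1..splice_at, insert the addendum, then append the rest.
    return pages[:splice_at] + addendum + pages[splice_at:]

# Splicing one addendum page after page 1 of a three-page original:
print(splice(["p1", "p2", "p3"], ["a1"], 1))
# → ['p1', 'a1', 'p2', 'p3']
```

Note that a splice point equal to the page count simply appends the addendum at the end, which is the same behavior you'd want from the shell script.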

Displaying your SharePoint taxonomy in a visual web part for easy reference

Image CC license from Mingle2 (http://mingle2.com/blog/view/web-developer-mind)

I was reading a post by Tobias Zimmergren on how to render your SharePoint taxonomy in a Visual Web Part (great article, thanks Tobias!) and decided to try it out on a client's SharePoint; it worked rather well for small taxonomies, but I started seeing a few issues when working with larger taxonomies that had multiple levels of terms. Rather than scrap the whole thing, I figured I would rewrite it a bit and see what happened.

The resulting code worked great for me, so I'm publishing it in hopes that someone keeps improving on Tobias' original work. The reason I had sought out something like this is that one often has to rework taxonomies, and the built-in Term Store manager doesn't output the taxonomy in a very printer-friendly format; this definitely helps!

To use it, open up Visual Studio and create a new SharePoint 2010 Visual Web Part project. Your SharePoint Visual Web Part will have a TreeView control on it called tvMetadataTree; its code-behind will look something like this:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;

namespace DisplayTaxonomy.VisualWebPart1
{
    public partial class VisualWebPart1UserControl : UserControl
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Build the taxonomy tree and hand its root node to the TreeView.
            TaxonomyToTreeSet taxonomyHelper = new TaxonomyToTreeSet();
            tvMetadataTree.Nodes.Add(taxonomyHelper.getTreeTaxonomy());
        }
    }
}

Your taxonomy-to-treeview class will be a bit heftier; I've implemented it recursively because to me it seems simpler and cleaner (though more resource-consuming).

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;
using System.Collections;

namespace DisplayTaxonomy
{
    class TaxonomyToTreeSet
    {
        TaxonomySession _TaxonomySession;
        TreeNode _RootNode;

        public TaxonomyToTreeSet()
        {
            // Open a taxonomy session against the current site and build the tree.
            SPSite thisSite = SPContext.Current.Site;
            _TaxonomySession = new TaxonomySession(thisSite);
            _RootNode = new TreeNode();
            _RootNode.Text = "Site Taxonomy";
            getTermStores(_TaxonomySession, ref _RootNode);
        }

        public TreeNode getTreeTaxonomy()
        {
            return _RootNode;
        }

        private void getTermStores(TaxonomySession session, ref TreeNode parent)
        {
            foreach (TermStore ts in session.TermStores)
            {
                TreeNode node = new TreeNode(ts.Name, null, null, "", null);
                getGroups(ts, ref node);
                parent.ChildNodes.Add(node);
            }
        }

        private void getGroups(TermStore ts, ref TreeNode parent)
        {
            foreach (Group g in ts.Groups)
            {
                TreeNode node = new TreeNode(g.Name, null, null, "", null);
                getTermSets(g, ref node);
                parent.ChildNodes.Add(node);
            }
        }

        private void getTermSets(Group g, ref TreeNode parent)
        {
            foreach (TermSet tset in g.TermSets)
            {
                TreeNode node = new TreeNode(tset.Name, null, null, "", null);
                getTerms(tset, ref node);
                parent.ChildNodes.Add(node);
            }
        }

        // Recurses through the term hierarchy; accepts either a TermSet
        // (on the first call) or a Term (on every recursive call).
        private void getTerms(object term, ref TreeNode parent)
        {
            if (term.GetType() == typeof(Term))
            {
                foreach (Term t in ((Term)term).Terms)
                {
                    TreeNode node = new TreeNode(t.Name, null, null, "", null);
                    getTerms(t, ref node);
                    parent.ChildNodes.Add(node);
                }
            }
            if (term.GetType() == typeof(TermSet))
            {
                foreach (Term t in ((TermSet)term).Terms)
                {
                    TreeNode node = new TreeNode(t.Name, null, null, "", null);
                    getTerms(t, ref node);
                    parent.ChildNodes.Add(node);
                }
            }
        }
    }
}
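The recursion pattern itself is platform-agnostic: every level (term store, group, term set, term) becomes a node whose children are built by the same depth-first walk. A minimal sketch of that walk, with plain Python dicts and tuples standing in for the SharePoint objects (all names here are my own illustrations, not SharePoint APIs):

```python
def build_tree(name, children):
    # Each node mirrors a TreeNode: a label plus a list of child nodes.
    node = {"text": name, "children": []}
    for child_name, grandchildren in children:
        # Recurse: each child is built the same way as its parent.
        node["children"].append(build_tree(child_name, grandchildren))
    return node

# Term store -> group -> term set -> terms, as nested (name, children) pairs:
taxonomy = [("Term Store", [("Group", [("Term Set",
            [("Term A", []), ("Term B", [])])])])]
tree = build_tree("Site Taxonomy", taxonomy)
print(tree["children"][0]["children"][0]["children"][0]["children"][1]["text"])
# → Term B
```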

That pretty much covers it, really. I hope that you find this useful. Don't hesitate to share! If you would rather see a video demo of how to implement this, drop me a comment and I'll set that up.

Happy coding,

R.

Stay on top of your budgets with Toggl alerts

Image courtesy of Cam Hoff, worksonpaper.ca (http://design.org/blog/infographic-time-keeping)

This week, the lovely KRED of Research Salad blogged about a cloud time-tracking service we use called Toggl. This nifty tool allows you to keep track of your project hours for multiple clients, with multiple rates via the web, a desktop app, or a smartphone app. It has a clean, simple interface and it makes it easy for you to have an overview of your team’s hours per project, or even export your hours to a billing service of your choice for easy reconciliation. Of the time-tracking systems I’ve used, it’s the least painful by far.

Today, they've released yet another feature that resolves one of the few peeves I have about time-tracking: it's called workspace alarms, and it notifies you when a project begins to approach its budgeted time allocation. This is practical when you're doing contract work and need to keep on top of how much time you spend on a project; if you're a librarian like KRED and you're tracking your time for yourself rather than for a client, it's useful for keeping tabs on the amount of time you've set aside for a type of activity. Finally, as a project manager, it's always good to know how accurate your estimates are; alerts are a good way to keep you in check.

Happy time-tracking,

R.

Browser compatibility

Achieving good browser compatibility, or: How does your page look on all web browsers?

Image CC license from Flickr user sixsteps

If you've ever designed a website, you'll be familiar with the frustration of browser compatibility. Yes, things like HTML and CSS are supposed to be standards, and yet every browser has its own way of rendering pages. Annoying, I know.

But perhaps this can turn your frown upside-down: Adobe has a complimentary (read: free) tool for checking your website across browsers, called BrowserLab. Better yet, it's a web application, so you can check how your page looks in IE without having to use Windows; it shows you how your page renders in several browsers, and even several versions of those browsers, on OS X and Windows.

This is cool. Very cool, even.

One thing you may wish to do, regardless of whether your page looks good in BrowserLab, is test your HTML's validity with the W3C markup validator. It'll help catch anything that's incompatible with the standard; though you'll quickly realize you can't fix every issue without breaking your site's functionality, it is helpful to know what may and may not work across browsers (at least, from the W3C's point of view).
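If you want a quick local smoke test before heading off to the validator, something as small as Python's standard-library HTML parser can flag grossly mismatched tags. This is a rough sketch of my own, nowhere near a substitute for the W3C validator (which checks the actual specification); the `check` helper and its messages are inventions for illustration:

```python
from html.parser import HTMLParser

# Void elements never take a closing tag, so they stay off the stack.
VOID = {"area", "base", "br", "col", "embed", "hr", "img",
        "input", "link", "meta", "source", "track", "wbr"}

class TagChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []      # currently open tags
        self.problems = []   # mismatches found so far

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        # A close tag should match the most recently opened tag.
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.problems.append("unexpected </%s>" % tag)

def check(html):
    checker = TagChecker()
    checker.feed(html)
    # Anything still open at the end was never closed.
    checker.problems.extend("unclosed <%s>" % t for t in checker.stack)
    return checker.problems

print(check("<div><p>hello</div>"))
```

It won't catch invalid attributes or misused elements, but as a first pass before pasting your markup into the validator it's surprisingly handy.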

Happy coding,

Red.