Truncating your SQL 2008 Database with a few lines of SQL…

Here’s a scenario you may be familiar with: you’ve got yourself a Sharepoint setup that you’ve gotten to run rather nicely. Conscientious admin that you are, you’ve set up a good, solid maintenance plan that checks your database health and backs up your database and transaction log… But all of a sudden, your backup drive fills up. Since everything has been hunky dory, you only notice during your next server check, and by then the transaction log has grown to monstrous proportions! You clear out your backup drive and free up space, but you realize to your horror that your transaction log isn’t shrinking… Oh no!

If all of this is hitting home, you’ve probably already realized that the nifty little commands that used to work in SQL Server 2005 don’t work on SQL Server 2008. So did I. Here’s my new trick for truncating your SQL 2008 database; I hope it helps. I highly recommend you read the whole article thoroughly before proceeding: it contains information you need to know before you run anything.

Open up SQL Server Management Studio, then open a query window to the database. For simplicity’s sake, I’ll assume your DB is called WSS_Content, but if you’ve got multiple content databases / site collections (as well you should), the same applies with a different database name / log file name.

First, run this:

alter database WSS_Content set recovery simple with NO_WAIT
go
checkpoint
go
dbcc shrinkfile('WSS_Content_Log', 1)
go
alter database WSS_Content set recovery full with NO_WAIT

And get yourself some coffee. Lots of coffee — the bigger your transaction log is, the longer it will take. Run this during a weekend, or at a time when there are as few people in your office as possible; do NOT abort the process, or you’ll regret it.
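If you want a rough idea of what you’re in for before you kick this off, the following read-only check lists every database’s log size and how much of it is actually in use (it changes nothing; you just need rights to run DBCC commands):

dbcc sqlperf(logspace)
go

The ‘Log Space Used (%)’ column is the one to watch: a huge log that is almost empty will shrink quickly once truncated.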

The above snippet of code switches your database from the full recovery model to the simple recovery model. The full recovery model makes thorough use of the transaction log; the simple one does not. Before SQL Server actually makes any changes to its database, it records the operations in the transaction log – this is so that if your server crashes, it can recover and finish (or roll back) whatever it was doing when it crashed. This is what makes your SQL database so nice and robust: it is logging EVERYTHING it does so that if something goes wrong it can retrace its steps.

I know what you’re thinking, and no: it’s not a good idea to keep your database in ‘simple’ mode, no matter how good your backups are. The rule of thumb is that if you have a production database that stores data of any relevance at all to you, you should be using the full recovery model, period. If your database is a ‘holding area’, meaning you’re just using it to perform computations and pass the results off to another database, you can use the simple recovery model, maybe even run the database on a RAID-0 array so it’s nice and fast. The same goes if your database is written to only once a day, for instance if you are retrieving data from another site or the web, caching it locally, and backing it up immediately afterwards. Those are the only two examples I can think of where it makes sense to use the simple recovery model.
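As an aside, if you ever want a quick inventory of which recovery model each database on your server is currently using, this read-only query will tell you:

select name, recovery_model_desc from sys.databases
go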

Now that you’ve executed the shrink code above, the following should be pretty fast:
backup log WSS_Content to disk = N'{some backup location of your choice}'
go
dbcc shrinkfile('WSS_Content_Log', 1)

This is what actually shrinks the log file: it takes the transaction log backup that SQL 2008 expects and then shrinks the file. Of course, if you have enough space on your backup drive, you may wish to just execute this code on its own – it all depends on how big your transaction log has grown.
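One last tip: the logical log file name isn’t always ‘<database name>_Log’, so if the shrink above complains that the file doesn’t exist, you can look up the exact name from within the content database itself:

use WSS_Content
go
select name, type_desc from sys.database_files
go

The row with type_desc = 'LOG' is the one whose name you want to feed to dbcc shrinkfile.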

The importance of prototyping

In my current job, one of my roles is to take people’s needs and turn them into software. I’ve helped develop and evolve the software engineering process in my company over the last ten years, and I find it interesting that of all of the documentation we produce, the most critical artifact tends to be the mockup (a.k.a. the ‘prototype’, or ‘storyboard’, depending on what industry you come from).

On the importance of mockups

Mockups are crucial to building intuitive, relevant software for your target audience, and here’s why: when you’re brought in to design software, you begin by exchanging ideas, needs, scopes and budgets with your client. You spend a good chunk of time doing nothing but talking about objectives, stakeholders, features, and risks – and by the end of your analysis, your client believes that you know everything that you need to know in order to build what he/she needs.

The reality is, of course, that it’s not so simple. By the end of your needs analysis, you’ll definitely know a lot more about the client’s work and needs than when you went in – but there’s a difference between understanding the gist of a person’s work and being able to do that work. Although you have an appreciation for the complexity of that person’s job, your software’s intuitiveness and completeness will be limited by your high-level understanding of the process and information the job involves (unless you come from that domain, naturally). Your client, for her part, may have conveyed her needs to you, but she has no means to gauge whether 1) you’ve understood them or 2) you’re able to translate them into something intuitive to use.

This is where prototyping comes in: now that you understand the concept, you can start to design the interface the client will use to interact with the system. If the design is good, it will reflect the underlying business logic of your solution. The client will therefore be able to gauge your understanding of her domain and comment on whether your design is intuitive and practical… or not. This avoids the cost and frustration of having to redesign your software based on misunderstandings.

Convinced yet? If so, allow me to share the name of a tool I’ve been using very happily: Balsamiq. I’m not affiliated with the company in any way – frankly, I’m just that impressed with what they’ve implemented. Worth checking out at any rate.

Introducing Balsamiq

Balsamiq is not for designing graphical user interfaces, as one would with Dreamweaver, Eclipse or Visual Studio. It is specifically built to design mockups – there’s a difference. The output you get from Balsamiq is deliberately rough and sketch-like, closer to a hand-drawn wireframe than a pixel-perfect screen.

When people are shown realistic-looking interfaces, they tend to focus on fonts, colors and choice of image; and although these are certainly an important part of making the solution user-friendly and pleasant to use, those are easily changed later. When you show your prototype to clients, you want them giving you feedback on things like navigation, content and layout, because these are what will make the difference between something that will be used on a daily basis and something that will do the virtual equivalent of collecting dust in the back of the office.

It’s all about speed, speed, speed

The purpose of prototyping is to save time. You can create mockups with nothing more than a pencil and paper, so there should be a good reason for using prototyping software. I’ve found that with Balsamiq, I’m able to create prototypes in a fraction of the time it takes me to design the interface by hand (or using GUI designers like Dreamweaver). Not only that, but the interfaces are rich enough to be immediately recognizable to the client – although you don’t want your prototypes to look too realistic, it’s no good if people spend most of their time trying to understand what it is you’ve drawn.

Compatibility

Balsamiq runs on most common operating systems. For those of you who use Ubuntu, you may be disappointed at first when you realize that Adobe has dropped its support for AIR on Linux. Do not be discouraged! AIR, and therefore Balsamiq, can in fact be installed on Ubuntu 12.04 using these instructions: http://www.liberiangeek.net/2012/04/install-adobe-flash-reader-air-in-ubuntu-12-04-precise-pangolin/

Do note that Balsamiq also works very nicely as an application in Chrome and can be purchased from the Google Marketplace. The benefit of using the desktop version, however, is in the links you can create between mockups: you can set up your mockups to point to other mockups and therefore make the presentation of your prototype more interactive.

Final words

If there’s anything you should take away from this article, it’s this: prototype. Your. Software. I’ve lost count of how many times I’ve presented a mockup to clients and they’ve said, "I can tell you’ve understood, but this isn’t quite what I had in mind". I consider this a happy problem – because the alternative is showing up after countless hours of development only to find out I’m going to have to scrap a lot of my work and start afresh. Prototyping is not only cost-effective because it mitigates the risk of project failure due to silly misunderstandings; it also spares you and your client a lot of frustration.

Happy prototyping,

R.

Multiple Sharepoint List Synchronizations in Outlook via GPO

When setting up access to a few Sharepoint contact lists using GPO for a client, I realized that only one of the lists was being synchronized. The source of the problem is the fact that when assigning GPO’s that have the same setting, the GPO’s don’t append to each other — they overwrite each other.

This is a pain for sure, but after a few unsuccessful attempts, I realized that the problem I was facing was just one of perspective. Here’s a short article that will hopefully help you adjust the way you think about multiple Sharepoint list synchronizations in Outlook via GPO.

A few examples to consider:

Example 1
GPO A is applied to the entire domain, GPO B is applied to the Sales OU. GPO A adds the Internal Contacts list, GPO B adds the Sales list.

What happens in the case of users in the Sales OU?

  • People in the domain who aren’t in the Sales OU get just the Internal Contacts list.
  • People in the Sales OU have GPO A’s settings overridden by GPO B, so they only get the Sales list.

Example 2
Let’s say there is no Sales OU, but you still have GPO A and B.
  • If the link order of A is lower than B’s, all users in the domain will get the Internal Contacts list.
  • If the link order of B is lower than A’s, all users in the domain will get the Sales list.

Your GPO Strategy:

My confusion was ultimately because of how I was thinking about the problem: GPO’s are feature-centric, not permission-centric. However, you’re applying them with users, groups and OU’s in mind, so it’s easy to slip into a mindset where you’re thinking of layering lists based on permissions.

Policy settings are overridden at the policy level, period. If you’re hoping to add a few rules to your domain computers’ firewall settings by setting up a policy with just those rules, assuming that they will get appended to everything you have set before, you are mistaken: though that would definitely be the most effective and intuitive way to implement GPO, it just doesn’t work that way.

Coming back to our examples:

– You can use security settings for GPO to enforce which policies get applied to whom, which is especially useful in the case of Example 2.

– If you want certain people to see both lists, then you have to think about rewriting your GPO’s. For instance, in the case of the Sales OU it makes sense that sales people see both lists. Rewrite GPO B to have both the internal contacts and the sales entries. What’s important here is that you set the link order correctly. Right-click on the OU under which the GPO’s are applied (this can be the root) and move the order of your GPO’s around; make use of permissions where necessary.

Troubleshooting:

I highly recommend using the RSOP (Resultant Set of Policy: Planning) feature in AD Users and Computers. By right-clicking on a user and going to All Tasks > Resultant Set Of Policy (Planning), you can see what the user’s policy is going to look like. Furthermore, there is a nifty Precedence tab which shows all of the policies that are being applied and in what order. This was particularly useful to me because I was inadvertently applying my policies at both the domain and OU level and had forgotten to set the link order at the OU level; once I removed my policies from the OU level, synchronization worked without a hitch.
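If you prefer the command line to the GUI, gpresult gives you a similar view from the client machine itself. Run it in a command prompt on the workstation (the user name below is just a placeholder, and you may need an elevated prompt to see the computer-side settings):

gpresult /r
gpresult /user jsmith /r
gpresult /h gpo-report.html

The first form summarizes the policies applied to the current session, the second targets a specific user, and the third dumps a full HTML report, which is handy to keep for your records.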

Thoughts on the Amazon / Apple hack

Just thought I would share this harrowing tale of how Mat Honan basically got his info deleted off all his devices and personal e-mail accounts within hours:

http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/all/

A few thoughts on this:
  • The guy basically got all his info wiped from his Mac and iPhone, using the very mechanisms (remote wipe) that are meant to keep those devices safe from data theft.
  • The “entry point” here was the victim’s Amazon account; the attacker made his way from the victim’s Amazon account into his .me account, and from there into his Gmail account. He wiped the Mac and iPhone, changed the .me and Gmail passwords, then hit the guy’s Twitter account.
  • The point of this “Apple hack” that cleared out irreplaceable photos and files actually had nothing to do with his photos, files, or even work data. The real target of this attack? The guy’s Twitter account. Why? ‘Because it looked cool.’
  • The attacker exploited two different “security philosophies” to gain access. In a nutshell, one company was using the last four digits of the victim’s credit card to verify his identity, while it’s fairly common practice elsewhere to display those same four digits as a harmless way of identifying a card without giving away the whole number. One company’s throwaway identifier was another company’s secret.
The moral of the story? Think security no matter what or where your systems are. I realize how silly this sounds, and how daunting it can be. Ultimately, we all have insecure practices, especially in a day and age where the boundaries between the technologies we use for work and at home are so blurry.
This story will make you think twice about relying on the cloud — but the reality is that it shouldn’t take a story like this to get you thinking about it. You may surmise that this could happen to you and stop using .me, Gmail and the like… Don’t! You’d be taking away the wrong message. It’s not because it’s in the cloud that it’s insecure; it’s that we tend to mistakenly rely on other companies to do the thinking for us.

Transparent Login to Sharepoint isn't as simple as one would assume...

Getting Office to transparently authenticate with a TMG-secured MOSS 2010

Over the past few months, I’ve been working with a client on ramping up their existing Sharepoint installation. Although there’s still a lot of work to do, we’re starting to see light at the end of the tunnel: we’ve set up a new production farm with a Sharepoint server, an SQL Server, and a Forefront TMG reverse proxy. The Sharepoint / SQL components run on a virtual environment for better scaling and (more importantly) snapshotting, and the TMG setup is starting to look very sweet indeed; we’d run across some performance hiccups but those now seem to be sorted.

If you’re a sysadmin, you may be able to appreciate the work involved in setting up, testing, and fool-proofing the above. However, you will probably also understand that to end-users, this isn’t particularly interesting. In fact, if you place yourself in the mindset of a non-technical client, what you’ve effectively witnessed is absolutely no change to your (increasingly expensive) system. “In fact,” you may muse, “I’m worse off than before.” Indeed, setting up a TMG does pose a few challenges to overcome: namely the fact that when you open up documents that are stored on Sharepoint, you now get prompted for a user name and password.

That particular issue was a hard sell to the client — and frankly, why shouldn’t it be? We IT folks keep talking about the benefits of storing documents on private clouds, but recurring frustrations like being constantly prompted for a username and password are exactly what prevent people from adopting cloud technology. So after some digging, I came up with a solution for getting Office to transparently authenticate with a TMG-secured MOSS installation.

I must admit that this post started out in a different format: it was logged as a trouble-ticket in our ITIL system. However, I spent so much time scouring the Net for simple, concise information on the topic that I think it’s worth re-mentioning. I will assume that you’re looking for some basic information on the topic and a few leads to more detailed articles.

What is SSO?

SSO stands for Single Sign-On. It allows users who have logged into a domain to transparently re-use their credentials across all sites under that domain. This may sound trivial, but in this day and age, when more and more enterprise services are accessible via the cloud, it is a mission-critical feature. Consider this: if your Exchange inbox is at mail.yourdomain.com and your Sharepoint is at sharepoint.yourdomain.com, SSO is what allows you to authenticate once and access both resources without having to re-enter your creds.

SSO doesn’t just apply to websites, but also technologies. In my case, I had gotten feedback from a client that she was getting annoyed at constantly being prompted for her password when opening documents from Sharepoint; SSO is used to transparently authenticate your users, saving them a bit of time and typing.

For more information on SSO with TMG, please consult the following link: http://technet.microsoft.com/en-us/library/cc995112

The configuration, server-side:

– You need to be using Forms-Based Authentication; this is set up from your web listener’s properties, and frankly, this provides the most consistent, secure, interop-compliant end-user experience.
– The SSO features need to be enabled (this is also on the listener’s side). Make sure to specify the domains for which you want to use SSO.
– Enable persistent cookies: you do this by editing the properties of your listener, going to the Forms tab, and hitting Advanced. There, you’ll have a section for cookies. You needn’t enter a name for your cookie, but set the “Use Persistent Cookies” dropdown to “only on private computers”.

The configuration, client-side:

– Add the sharepoint portal to either your trusted sites or your intranet zone. This can be done either by using GPO or by running this manually on all computers that should use SSO:

  • Open Internet Explorer
  • Open Internet Options
  • Navigate to the Security tab
  • Click on the zone
  • Click on Sites
  • Add your site

Note that if you go the GPO way, your domain users will no longer be able to control their sites’ zones. Although this may sound reasonable if you’re a sysadmin, please do remember that your end-users may think differently depending on their corporate culture.
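If you go the manual route but don’t fancy clicking through Internet Explorer on every machine, the zone assignment is just a per-user registry value, so it can be scripted. Here’s a minimal sketch, assuming your portal lives at https://sharepoint.yourdomain.com (adjust the domain and host to your own; a value of 2 puts the site in Trusted sites, 1 would put it in the Local intranet zone):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\yourdomain.com\sharepoint" /v https /t REG_DWORD /d 2 /f

Since the key lives under HKCU, run it in the context of the user, for instance from a logon script.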

– Protected Mode needs to be disabled for the zone in which you’ve put the portal. This is done in Internet Explorer > Internet Options > Security tab > click on the zone and untick the “protected mode” checkbox.

A few additional notes:

– You could use naked IIS instead of TMG; however, this is not advised. Naked IIS exposes your system directly to security threats, reduces your monitoring capability, and prevents you from putting a reverse cache in front of your pages (which hurts performance and, in the case of public sharepoints, searchability).
– Use of persistent cookies can be dangerous, mainly because they can easily be stolen and re-used. This is why it is highly recommended to enable persistent cookies for private computers only.

Finally, I’d like to thank Dinko Fabricini for his easy-to-follow post. To be perfectly honest, his post is a much better how-to than this article. As mentioned earlier, this is an adaptation of a trouble-ticket tech note I logged for my company which I thought would be useful to others. If you’re interested in setting up SSO for your MOSS, I would highly recommend you check out his original post.
http://www.itsolutionbraindumps.com/2011/01/multiple-authentication-prompts-when.html

Addendum: here’s a short and straightforward vlog post about the benefits of using TMG over naked IIS… Useful points to consider when having conversations with the sysadmin team! http://www.youtube.com/watch?v=PnKCZctn8TM

Mandatory BYOD

BYOD? PEBKAC? Up to you to decide

Image CC license from openclipart.org user Improulx

I was reading a ZDNet article this morning which observed that more and more executives are thinking about making BYOD mandatory. I know that some of my clients are considering it, and if you read the article it makes sense, in a certain way: it’s less expensive for companies if users come in with their own smartphones and computers, and users tend to take better care of their own equipment than of company-provided equipment, so IT departments may end up fielding fewer support requests.

I’d like to point out two things, however. First, this statement:

“Companies and agencies are recognizing that individual employees are doing a better job of handling and managing their devices than their harried and overworked IT departments.”

I’m sorry… What? I’ve done IT support for home users, and one thing I can tell you for sure is that most people haven’t the slightest clue about handling and managing their devices safely. Most home computers I’ve worked with have had at least one of the following problems:

  • Antivirus and firewall software that’s not up to date, mostly because the user bought a subscription license and didn’t realize it had to be renewed
  • Exposure to malware which has been left unchecked. If the user is a fan of illegally downloading software, music, or video of any variety, the exposure is of course much greater; if the household has children, especially aged 10 or older, exposure is almost a certainty
  • A machine that is slow or not functional. This boils down to two things: either the machine is completely overloaded with software (crapware, trials and other programs) or it is well past its expected lifespan — which means that if the hardware dies, replacing the equipment will be nigh impossible and the chances of recovering the data are slim.

Second, I’d like you to consider the findings of the Verizon data breach report, which indicate that 10% of all the data breaches they’ve dealt with were physical, and 5% were due to misuse (read: disgruntled employees, abuse of privileges, etc.)*. Can you think of a better environment for data exfiltration than a BYOD environment?

*: They do go on to indicate that less than 1% of the compromised data comes from misuse. What does this mean? Large companies like LinkedIn, Facebook or Sony PSN hold a lot of information; when their data is breached, records are typically lost by the millions. That skews the figures, because in comparison the secret formulation of your latest cancer drug amounts to very little data — but its value is easily comparable.

BYOD is coming, make no mistake about that. However, my take is that as IT policy-makers it is up to us to set the pace and the guidelines for such endeavors; it’s not good enough to throw up our hands and say that it’s happening anyway. We need to find efficient compromises and, while policy and security catch up with technical innovation, make sure no one gets hurt.

Happy policy-making,

R.

A nifty little script for splicing pages into your PDF’s

Image CC license from openclipart.org user warszawianka

Today, I was in dire need of inserting a page into a PDF without having to regenerate the entire document. I know there are products out there that do this, even cloud services such as foxyutils, but the PDF’s are kind of sensitive, so I didn’t want to upload them to a third-party service, and I didn’t feel like going through the whole dance of purchasing commercial software either. So I built my own basic script in bash, using pdftk to manipulate the files.

Usage is very, very simple. Let’s assume the following:

  • The name of the script below is splicer.sh and it is marked as executable
  • The original file is original.pdf
  • The additional page is addendum.pdf
  • You’re placing the contents of addendum after the first page
  • The final file is final.pdf

Then this is how you would call the script from your Linux command line:
./splicer.sh original.pdf addendum.pdf 1 final.pdf

Easy, right? So without further ado, here is the script for splicing pages into your PDF’s:

#!/bin/bash
# splicer.sh: insert the pages of one PDF into another after a given page.
# Usage: ./splicer.sh original.pdf addendum.pdf <page to splice after> final.pdf

original=$1
addendum=$2
splice_at=$3
part_two=$(expr "$splice_at" + 1)
result=$4

# Split the original into everything up to (and including) the splice page,
# and everything after it.
pdftk "$original" cat 1-"$splice_at" output "tmp_$result"
pdftk "$original" cat "$part_two"-end output "tmp2_$result"

# Stitch the three pieces back together: first part, addendum, second part.
pdftk "tmp_$result" "$addendum" "tmp2_$result" cat output "$result"

# Clean up the temporary files.
rm "tmp_$result" "tmp2_$result"
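To make it concrete, the example call above (splicing after page 1) boils down to these pdftk invocations:

pdftk original.pdf cat 1-1 output tmp_final.pdf
pdftk original.pdf cat 2-end output tmp2_final.pdf
pdftk tmp_final.pdf addendum.pdf tmp2_final.pdf cat output final.pdf
rm tmp_final.pdf tmp2_final.pdf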

Displaying your Sharepoint taxonomy in a visual web part for easy reference.

Image CC license from Mingle2 (http://mingle2.com/blog/view/web-developer-mind)

I was reading a post by Tobias Zimmergren on how to render your Sharepoint taxonomy in a Visual Web Part (great article, thanks Tobias!) and decided to try it out on a client’s Sharepoint; it worked rather well for small taxonomies, but I started seeing a few issues when working with larger taxonomies that had multiple levels of terms. Rather than scrap the whole thing, I figured I would re-write it a bit and see what happened.

The resulting code worked great for me, so I’m publishing it in the hope that someone keeps improving on Tobias’ original work. The reason I had sought out something like this in the first place is that one often has to re-work taxonomies, and the built-in Term Store manager doesn’t output the taxonomy in a very printer-friendly format — this definitely helps!

To use it, open up Visual Studio and create a new MOSS 2010 Visual Web Part project. Your Sharepoint Visual Web Part will have a TreeView control on it called tvMetadataTree; the code-behind will look something like this:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;

namespace DisplayTaxonomy.VisualWebPart1
{
    public partial class VisualWebPart1UserControl : UserControl
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            TaxonomyToTreeSet taxonomyHelper = new TaxonomyToTreeSet();
            tvMetadataTree.Nodes.Add(taxonomyHelper.getTreeTaxonomy());
        }
    }
}

Your taxonomy-to-treeview class is a bit heftier; I’ve implemented it using recursion because to me it seems simpler and more elegant (though somewhat more resource-consuming).

 

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;
using System.Collections;

namespace DisplayTaxonomy
{
    // Walks the taxonomy of the current site collection and exposes it as a
    // TreeNode hierarchy: term stores > groups > term sets > terms (recursively).
    class TaxonomyToTreeSet
    {
        TaxonomySession _TaxonomySession;
        TreeNode _RootNode;

        public TaxonomyToTreeSet()
        {
            SPSite thisSite = SPContext.Current.Site;
            _TaxonomySession = new TaxonomySession(thisSite);
            _RootNode = new TreeNode();
            _RootNode.Text = "Site Taxonomy";
            getTermStores(_TaxonomySession, ref _RootNode);
        }

        public TreeNode getTreeTaxonomy()
        {
            return _RootNode;
        }

        private void getTermStores(TaxonomySession session, ref TreeNode parent)
        {
            foreach (TermStore ts in session.TermStores)
            {
                TreeNode node = new TreeNode(ts.Name, null, null, "", null);
                getGroups(ts, ref node);
                parent.ChildNodes.Add(node);
            }
        }

        private void getGroups(TermStore ts, ref TreeNode parent)
        {
            foreach (Group g in ts.Groups)
            {
                TreeNode node = new TreeNode(g.Name, null, null, "", null);
                getTermSets(g, ref node);
                parent.ChildNodes.Add(node);
            }
        }

        private void getTermSets(Group g, ref TreeNode parent)
        {
            foreach (TermSet tset in g.TermSets)
            {
                TreeNode node = new TreeNode(tset.Name, null, null, "", null);
                getTerms(tset, ref node);
                parent.ChildNodes.Add(node);
            }
        }

        // Accepts either a TermSet (first level) or a Term (deeper levels) and
        // recurses until it runs out of child terms.
        private void getTerms(object term, ref TreeNode parent)
        {
            if (term.GetType() == typeof(Term))
            {
                foreach (Term t in ((Term)term).Terms)
                {
                    TreeNode node = new TreeNode(t.Name, null, null, "", null);
                    getTerms(t, ref node);
                    parent.ChildNodes.Add(node);
                }
            }

            if (term.GetType() == typeof(TermSet))
            {
                foreach (Term t in ((TermSet)term).Terms)
                {
                    TreeNode node = new TreeNode(t.Name, null, null, "", null);
                    getTerms(t, ref node);
                    parent.ChildNodes.Add(node);
                }
            }
        }
    }
}

That pretty much covers it, really. I hope that you find this useful. Don’t hesitate to share! If you would rather see a video demo of how to implement this, drop me a comment and I’ll set that up.

Happy coding,

R.

Stay on top of your budgets with Toggl alerts

Image courtesy of Cam Hoff, worksonpaper.ca (http://design.org/blog/infographic-time-keeping)

This week, the lovely KRED of Research Salad blogged about a cloud time-tracking service we use called Toggl. This nifty tool allows you to keep track of your project hours for multiple clients, with multiple rates via the web, a desktop app, or a smartphone app. It has a clean, simple interface and it makes it easy for you to have an overview of your team’s hours per project, or even export your hours to a billing service of your choice for easy reconciliation. Of the time-tracking systems I’ve used, it’s the least painful by far.

Today, they’ve released yet another feature that resolves one of the few peeves I have about time-tracking: it’s called workspace alarms, and it notifies you when a project begins to approach its budgeted time allocation. This is practical when you’re doing contract work and need to keep on top of how much time you spend on a project; if you’re a librarian like KRED and you’re not tracking your time for a client but rather for yourself, it’s useful for keeping tabs on the amount of time you’ve set aside for a type of activity. Finally, as a project manager, it’s always good to know how accurate your estimates are — alerts are a good way to keep you in check.

Happy time-tracking,

R.

Browser compatibility

Achieving good browser compatibility, or: How does your page look on all web browsers?


Image CC license from Flickr user sixsteps

If you’ve ever designed a website, it’s likely that you’ll be familiar with the frustration of browser compatibility. Yes, things like HTML and CSS are supposed to be standards and yet, every browser has its own way of rendering its pages.  Annoying, I know.

But perhaps this can turn your frown upside-down: Adobe has a complimentary (read: free) tool for checking your website on all browsers, called BrowserLab. Better yet, it’s a web application, so you can check how your page looks in IE without having to use Windows; it shows you how your page will look in several browsers, and even in several versions of those browsers, on OSX and Windows.

This is cool. Very cool, even.

One thing you may wish to do regardless of whether your page looks good in BrowserLab is test your HTML’s validity with the W3C markup validator. It’ll help catch anything incompatible with the standard; though you’ll quickly realize you can’t fix all of the issues without breaking your site’s functionality, it is helpful to know what may and may not work on all browsers (at least, from W3C’s point of view).

Happy coding,

Red.