Congratulations, Jeanette!
Jeanette, the godmother of my daughter Emilia, gave birth to her first baby this morning! It was a little girl, now named Nellie, who weighed 3.2 kg and was 48 cm long.
Congratulations and all the best to you!
I’ve been wondering whether image replacement, and the promotion of it, is really a good idea. But let’s start from the beginning: what is image replacement?
Image replacement is a common name for a technique that uses images for headings and the like from an external CSS file, as opposed to in the XHTML/HTML. The general approach is to hide the text content of the element (one way or another) and instead show an image through CSS.
An example (Note: this is not the most sophisticated way to do it, but an easy one to get an overview of the basic idea):
HTML
<h1>
<span>Some text...</span>
</h1>
CSS
h1{
width: 138px;
height: 40px;
background:url(images/my-logo.png) no-repeat;
}
h1 span{
display: none;
}
Many more alternatives/techniques can be found in Dave Shea’s Revised Image Replacement.
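For instance, one well-known alternative shifts the text off-screen instead of using display: none, which many screen readers treat as “don’t read this at all”. A rough sketch, reusing the h1 from the example above:
CSS
h1 {
width: 138px;
height: 40px;
background: url(images/my-logo.png) no-repeat;
/* Move the text out of view instead of hiding it completely */
text-indent: -9999px;
overflow: hidden;
}
With this approach, the inner span isn’t needed either.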
There are some general arguments for why to use image replacement, and I thought I’d respond to them here:
- Most of what it accomplishes can be achieved just as well with a plain img tag.
- img elements are cached in most, if not all, major web browsers.
- Alternative text can be offered through the alt attribute on the img tag.
Another major reason for not using image replacement is that, to my knowledge, there’s still no way to handle the scenario where the user has images turned off but CSS on: they will see neither the text nor the image. There are, however, ways to do JavaScript-enhanced image replacement, but to me, being dependent on JavaScript isn’t an option either.
So, use image replacement if you want to. I know I won’t (at least not until someone convinces me of any advantage it has over using an img tag).
Last night, the original line-up of Black Sabbath played the Globe arena in Stockholm to an ecstatic crowd. I was there, of course, and really had a great time! They have inspired so many musicians and they’re the foundation of many, if not all, heavy metal bands, so seeing them live is an extraordinary experience.
Two things that really made it special:
I was in the standing part of the crowd, in the front enclosure, rocking away just a couple of metres from the stage. Why? Because I like to go crazy at concerts, jumping up and down with a big mass of people. Having Ozzy throw buckets of water on me and the others sure helped too! :-)
And, at the end of the concert, guitar god Iommi came to the center of the stage and threw out some picks. Everyone struggled and tried to get a pick. And I managed to get one! :-)
To generalize, there are three different standpoints web developers usually take when it comes to implementing JavaScript in a web page:
- Make the page completely dependent on JavaScript, with no fallback at all for those who don’t have it. Bad.
- Add a noscript tag with a text explaining, for those who don’t have JavaScript, that they can’t use the page. Better.
- Skip the noscript tag, it’s redundant. Instead, include the necessary elements or warning texts in the code that’s initially loaded, and then use JavaScript to hide them (see the sketch below). Best!
So, no noscript, m’kay?
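To make the third approach concrete, here’s a minimal sketch (the id no-js-warning and the markup it refers to are made up for the example): the warning text sits in the regular markup, and the script hides it as soon as we know JavaScript is actually running.
JavaScript
// The page contains something like:
// <p id="no-js-warning">This page works best with JavaScript enabled.</p>
window.onload = function () {
    // Object detection: browsers without the needed DOM support simply leave the text visible
    if (document.getElementById) {
        var warning = document.getElementById("no-js-warning");
        if (warning) {
            warning.style.display = "none";
        }
    }
};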
Google Earth must be one of the coolest applications I’ve ever seen! Given my love of traveling and keen interest in seeing the world, this was a real eye-opener.
And just think about the implications! I really wonder where all this will end!
A tip: hold down the left mouse button to drag the map around, and the right mouse button while dragging up or down to zoom in and out.
PS. Thanks to Faruk for bringing this to my attention. DS.
What I want to touch on with this post is how errors are handled when XHTML is served the way it should be. Let’s, for the sake of argument, say that we want to write and deliver XHTML (I don’t want to turn this into a discussion about whether we should write HTML or XHTML).
First, some general background information about how to send documents to the requesting web browser. It’s all about the media type, described in XHTML Media Types:
- HTML documents (e.g. HTML 4.01) should be sent with the text/html MIME type.
- XHTML 1.0 documents, in all three flavors (strict, transitional and frameset), should be sent with the application/xhtml+xml MIME type, but may be sent as text/html when they conform to Appendix C of the XHTML specification.
- XHTML 1.1 documents should be sent with the application/xhtml+xml MIME type; they should not be sent with the text/html MIME type.
So, what’s the difference? Web pages sent as text/html are interpreted as HTML, while those sent as application/xhtml+xml are treated as a form of XML. However, this does not apply to IE, because it doesn’t even understand the application/xhtml+xml MIME type to begin with; instead it tries to download the page as a file. So, no application/xhtml+xml for IE.
Aside from IE’s lack of support for it, and from the things you need to consider that Mark Pilgrim describes in his The Road to XHTML 2.0: MIME Types article, it means that when a web page sent as application/xhtml+xml contains a well-formedness error, the page won’t render at all.
The only thing displayed will be an error message when such an error occurs. This is usually referred to as draconian error handling, and its history is told in The history of draconian error handling in XML.
My thoughts about this were partly triggered by seeing many web developers write XHTML 1.1 web pages and then send them as text/html, using it only because it was the latest thing, not for any features that XHTML 1.1 offers (this also goes for some CMS companies that ship invalid XHTML 1.1, sent as text/html, as the default in the page templates their customers are meant to take after). Sigh…
It is also partly inspired by an e-mail that I got a couple of months ago, when Anne was kind enough to bring an error on my web site to my attention, with the hilarious subject line:
dude, someone fucked up your XHTML
What had happened was that Faruk Ates had entered a comment on one of my posts and his XHTML had been messed up (probably because of some misinterpretation by my WordPress system), thus breaking the well-formedness of my web site so that it didn’t render at all.
Because of that, and when using it for major public web sites, I really wonder if this is the optimal way to handle an error. Such a small thing as an unencoded ampersand (example: & instead of &amp;) in a link’s href attribute will result in the page not being well-formed, and thus not rendered. Given the low quality of the CMSs out there and the terrible output from many WYSIWYG editors, the “risk” (read: chance) of the code being valid and well-formed is smaller than that of the code being incorrect. Many, many web sites out there don’t deliver well-formed code.
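To make that concrete, here’s the kind of tiny slip that is enough to take down a whole page served as application/xhtml+xml (the URL is made up):
HTML
<!-- Not well-formed: the ampersand must be written as an entity -->
<a href="/photos?album=rome&page=2">Next page</a>

<!-- Well-formed -->
<a href="/photos?album=rome&amp;page=2">Next page</a>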
Personally, I agree with what Ben de Groot writes in his Markup in the Real World post. I prefer the advantages of XHTML when it comes to its syntax and what is considered correct within it. However, Tommy once said to me that if you can’t guarantee valid XHTML, you shouldn’t use it. Generally, I see his point and think he’s right, but to strike the same note as Ben: I can guarantee my part of it, but there will always be factors like third-party content providers (such as ad providers), sub-par tools for the web site’s administrators, and so on. And for the reasons Ben mentions, I’d still go for XHTML.
So, conclusively, I have to ask: do you think XHTML sent as text/html is OK when it follows Appendix C of the XHTML specification? And do you agree with me that having a web site break and show nothing but an error when something isn’t well-formed isn’t good business practice?
Vorsprung durch Webstandards has a nice collection where people declare their love to CSS. There’s also a short interview with yours truly there.
The time has come. JavaScript will rise again from its hidden trenches.
Jeremy Keith recently held his JavaScript presentation The Behaviour Layer at the @media conference in London, and from what I’ve heard and read, the crowd went Oooh and Aaah when he introduced the concept of the DOM and how to write unobtrusive JavaScript.
I reacted in two ways when I heard about his presentation and the crowd reaction:
For a long time, JavaScript has had a bad reputation that I don’t think it deserves. It’s been based on a lack of knowledge and on common misconceptions that have spread like a virus. Let me address some of them:
One of them is that JavaScript doesn’t work the same in all web browsers. This belief is based on an old era, the so-called browser-wars days, when IE 4/5 and Netscape 4 were fighting for domination. And we all know how that went…
Nowadays, if you write proper scripts according to the standardized DOM, also known as DOM scripting, you will reach virtually every web browser on the market. By comparison, you’ll even get more widespread support than CSS 2 has!
The other day, I was at a seminar held by one of the leading CMS manufacturers in Sweden, and one question was whether the next version of their product would stop being JavaScript dependent, unlike the previous version (this is largely due to Microsoft.NET and what it generates), or whether its scripts would at least work properly cross-browser. The reply:
The problem we had was to get the scripts to work in all web browsers
The way he saw it, the problem was in the web browsers, not in the product, which upset me. At that point I had to step in and explain that the reason their scripts didn’t work is that Microsoft.NET’s controls generate JavaScript based on the scripting model Microsoft introduced with IE 4, and that’s why they don’t work in any other web browser.
If only Microsoft had taken the time and made the decision to implement proper DOM scripting, which is supported in every major web browser, including IE 5 and up on the PC, things would’ve been fine. So, let’s kill, once and for all, this misunderstanding that has flourished for a long time: correctly written scripts will work in any web browser.
Another one is that JavaScript and accessibility don’t go together. JavaScript does rhyme well with accessibility, but some, even many, of the things that have been developed with JavaScript haven’t. The reason for this is web developers not being aware of how it should be done correctly. However, believe me: when it comes to writing JavaScript, every serious web developer focuses as much on accessibility and standards as the people promoting them. And when JavaScript is used, be it for form validation on the client side to avoid unnecessary roundtrips, for dynamic menus or for something else (why not an AJAX application?), a non-JavaScript alternative should always exist to cater for those for whom JavaScript isn’t an option.
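As a rough illustration of that principle (the form id, field id and the check itself are made up), client-side validation can be layered on top of a form that still posts, and still gets fully validated on the server, when JavaScript isn’t available:
JavaScript
// A sketch: the validation is attached from the script, not inline in the HTML,
// and it only stops the submit when an obvious mistake can be caught client-side.
window.onload = function () {
    if (!document.getElementById) {
        return; // no DOM support, the form simply posts as usual
    }
    var form = document.getElementById("contact-form"); // made-up id
    if (!form) {
        return;
    }
    form.onsubmit = function () {
        var email = document.getElementById("email"); // made-up id
        if (email && email.value.indexOf("@") == -1) {
            alert("Please enter a valid e-mail address.");
            return false; // stop the submit and avoid the unnecessary roundtrip
        }
        return true; // let the form post; the server validates everything anyway
    };
};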
So, how do you create a page with unobtrusive and accessible JavaScript? Humbly, I think the pictures page of my Rome trip web site is a pretty good example of how to enhance the experience for those who can use JavaScript, while keeping it functional in those cases where JavaScript can’t be used.
It has a script that triggers when the page is loaded, but only in web browsers that support document.getElementById, which is verified through object detection. It then adds onmouseover and onmouseout events to the small images, so that when they are hovered they show a larger version of the current image. What this means is that the HTML isn’t cluttered with tons of event handlers, and for those who don’t have JavaScript activated, or use a web browser that doesn’t support it, the small images are also linked to the same larger versions. It also means that the script won’t throw an error in web browsers that lack the necessary support, thanks to object detection.
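In spirit, the script looks something like this (the ids, the thumbnail class name and the placeholder image are assumptions made for the example):
JavaScript
// The small images are regular links to the large versions, so everything works
// without JavaScript; the script only adds the hover behaviour on top of that.
window.onload = function () {
    // Object detection: browsers without the needed support just skip all of this
    if (!document.getElementById || !document.getElementsByTagName) {
        return;
    }
    var large = document.getElementById("large-image"); // made-up id of the big img element
    if (!large) {
        return;
    }
    var thumbs = document.getElementsByTagName("img");
    for (var i = 0; i < thumbs.length; i++) {
        if (thumbs[i].className == "thumbnail") { // made-up class on the small images
            thumbs[i].onmouseover = function () {
                // The thumbnail's parent link already points to the large version
                large.src = this.parentNode.href;
            };
            thumbs[i].onmouseout = function () {
                large.src = "images/default-large.jpg"; // made-up placeholder image
            };
        }
    }
};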
So now, get out there! Write DOM-based JavaScript that will greatly enhance your web sites!
Trust me, it’s a whole new level that will give you a big smile when you realize what you can accomplish! :-)
An old colleague of mine, Oscar Berg, has started blogging. Oscar is a well-experienced Business Analyst and Usability Designer, and I have to admire him for finding the time to start blogging while having two kids (and a third on the way).
He is one of the people behind the initial launch of the hugely successful hitta.se, and I actually wrote the very first HTML prototype of it. But unfortunately the company that owned the technical part of the project decided that they knew enough to code the interface themselves. If you look at the web site’s code, apparently they didn’t…
Anyway, for those of you interested in the business perspective on things, I strongly recommend a visit to his blog.
A common problem is that the Web Forms and Web Controls in ASP.NET generate invalid XHTML. Amongst these errors are invalid attributes and inline elements without a correct block element container, as well as good ol’ HTML comments in a script block, which prevents you from sending your XHTML with the application/xhtml+xml MIME type.
All these errors are automatically generated when the page in question is rendered and sent to the web browser, meaning that even if you write flawless code yourself, you will still fail to get valid output.
To the rescue, there are existing solutions out there that take care of this. Another option is to write your own fix and customize it to your specific needs; that should take anything from a day and up, depending on where you set the bar.
Or maybe you’re one of those who hope that ASP.NET 2.0 will take care of all this? In that case, I recommend reading Charl van Niekerk’s posts ASP.NET 2.0 – Part 1 and particularly ASP.NET 2.0 – Part 2.
ASP.NET 2.0 outputs lovely (*irony*) things like:
<form onsubmit="javascript:return WebForm_OnSubmit();">
and
document.all ?
document.all["Login1_UserNameRequired"] :
document.getElementById("Login1_UserNameRequired")
(for the vast IE 4 support required?) and still HTML comments in script blocks.
And when it comes to semantics, structure and unobtrusive JavaScript, it’s a mess.
Don’t get me wrong, I think it’s great that it validates, but validating alone doesn’t necessarily make it good code. Validation is just one of the components necessary for a good web page; there are, for instance, also semantics, accessibility and unobtrusive JavaScript (or, just as important, offering a way for things to work without JavaScript as well, which kind of connects back to accessibility).
My advice to you: one way or another, make the extra effort to ensure that the XHTML output from .NET is valid. It’s not that big a deal, and it’s totally reusable in your next .NET-based project.
Do you have experience with the above techniques for making it valid, or with some other way of accomplishing it? Please feel free to share!