It’s been six months since I dumped my $100/month satellite TV subscription in favour of a UHF antenna connected to a PC running Linux and MythTV, and I’m pleased to announce that I have absolutely no regrets about the decision.
When I started talking about doing it, one of my co-workers cautioned, “You’re going to end up being the sysadmin for your TV, do you really want that?” Honestly, it was one of the things I feared would happen. I worried that a package upgrade would take it out, or that MythTV would occasionally take a vacation and require manual intervention, but none of that has happened.
The setup has been rock solid, required less hands-on maintenance and reboots than the dedicated PVR hardware it replaced, and has survived numerous package upgrades and even a full blown distro upgrade to Ubuntu 12.04 without missing a beat. Having a computer connected to your TV has other advantages too. You can play games with MythGame, browse the web, or even play some big-screen online poker from the comfort of the couch.
If Linux could play Netflix, the setup would be perfect; it’s the only service we regularly use to consume media that MythTV currently lacks.
Updating statuses in Redmine via the API turned out to be harder than I expected. Taking the representation it sends you and changing the appropriate values gets you strange behaviour:
- The server responds to the PUT request with 200 OK
- The status is not changed.
After a bunch of trial, error, and googling, I hit upon the solution:
Note: I tried posting the following on the Redmine topic about this, but it was rejected as spam, so I’m posting it here in the hopes that it’ll help someone.
I was able to get it to work using XML:
<?xml version='1.0' encoding='utf-8'?>
<issue>
  <status_id>10</status_id>
</issue>
Note that I also had to set the Content-Type header to text/xml (despite .xml already being in the URL) for the update to take.
When I send the equivalent JSON (and Content-Type: application/json) I get an HTTP 500.
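Putting the pieces together, here’s a sketch of the working request in Python (just building it, not sending it; the host, issue ID, and API key value are made-up placeholders — X-Redmine-API-Key is Redmine’s standard key header):

```python
# Sketch of the working Redmine status update. The host, issue ID,
# and API key are placeholders; adjust them for your own instance.
import urllib.request

body = (b"<?xml version='1.0' encoding='utf-8'?>"
        b"<issue><status_id>10</status_id></issue>")

req = urllib.request.Request(
    "https://redmine.example.com/issues/42.xml",
    data=body,
    method="PUT",
    headers={
        # Required even though .xml is already in the URL.
        "Content-Type": "text/xml",
        "X-Redmine-API-Key": "your-api-key-here",
    },
)

# urllib.request.urlopen(req) would actually fire the request; a 200
# response where the status really changed is the success case.
```
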
To me it’s pretty bonkers that for an API claiming to be RESTful:
- you have to include .xml or .json in the URL AND set the Content-Type header. Content-Type alone should do the trick.
- the representation you send in a PUT is different from the one you GET from the server.
- the server responds 200 OK for requests where it does no work, for whatever reason.
But anyway, that’s how I managed to update the status. Hope it helps someone.
There are several options for PVR/DVR software, and there were a few things I wanted it to do:
- Have mostly the same capabilities as my existing Bell Satellite PVR
- Replace the current big, dumb file server I use for media files.
- Be controlled by my Harmony Remote
- Be stable once up and running.
- Not require much in the way of ongoing care and feeding.
With those in mind, I really only considered using Windows Media Center on Windows and MythTV on Linux. My gut told me MythTV would be a bigger hassle to set up, but less hassle overall over time, whereas WMC would probably work out of the box, but at the very least, Windows’ need to frequently reboot itself for updates would become a pain. As well, the existing media server I use is Linux, so making this new box serve that role would be pretty straightforward.
So, I grabbed the latest Mythbuntu release (11.10) and installed it. Since I normally use Ubuntu when I can, and Mythbuntu is more or less Xubuntu with MythTV plonked on top, it seemed like the best place to start. I had already confirmed that my TV tuner card had kernel support since 2.6.something, so I figured I was good to go.
A wild hairy yak appears
Not so fast! As these things sometimes go, the tuner card wouldn’t show up in the MythTV backend setup. After googling, I learned people were saying it worked in Mythbuntu/Ubuntu 10.04 LTS, but people who started at 11.04 were having similar trouble. (The version I was using – 11.10 – was barely a few days old at that point.) Reportedly, starting at 10.04 and upgrading through 10.10 to 11.04 would work fine. If that’s the case, there’s probably a way to get it working in 11.x directly, but the chances of finding it taking longer than just doing a couple of distro upgrades were pretty high, so I wiped 11.10, installed 10.04, and went through the steps to set up the tuner card. Success!
Next, I upgraded to 10.10, confirmed the card still worked…yup, then 11.04, still good. I couldn’t find anyone to confirm that the last step of upgrading to 11.10 would work, but I gave it a shot anyway. Huge success. With everything up to the latest version and working fine, the last hurdle was getting the Harmony remote set up to control it. All in all it took quite a while to get the current version of a thing running.
Yak number 2 arrives
The tuner card I’m using has a built-in infrared receiver and transmitter and even came with a corresponding remote control, so I figured it would be a piece of cake teaching the Harmony to act like it. Alas, it was not to be. It turns out nobody bothered to write the Linux driver for that feature of the card, so there was no way to use the IR features at all. Determined not to be thwarted at this point, and after a bit more research on what would actually work in Linux, I went out and spent $20 on a generic “Windows Media Center Remote” and corresponding USB IR receiver. The setup was pretty straightforward, and the Harmony remote already had it in its massive database of remotes it can emulate, so I just had to tweak the configurations a bit so that some of the buttons would perform the actions we were all used to from our current PVR, and we were all set.
Lastly, I set up a subscription to Schedules Direct so I’d have good guide data, and set up some scheduled recordings. I’m happy to report that everything’s been running swimmingly for just over a week with no additional intervention required on my part. It also plays our existing library of digital video in various formats (avi/xvid, mkv, etc.) without difficulty, all from the same interface, which is great. If the stability remains and we’re still happy with it in a few weeks, I’ll officially cut the cord and dance a small jig while I call up Bell and tell them to get stuffed.
As I talked about in Cutting the (space) cord, I’m currently pretty motivated to get rid of our satellite TV subscription and replace it with an over-the-air solution with little or no ongoing cost.
I’ve sorted out picking up the signals, but to be a true replacement for what we currently have I need to be able to record programs when they’re on and play them back at a more convenient time. We watch nearly all of our TV this way and going back to caring about the actual schedule just won’t work.
I searched around for stand-alone PVR appliances designed for OTA reception, and while a few exist (like the TiVo Premiere or ChannelMaster 7000PAL), their support for Canadian stations’ guide information is limited or non-existent and, in the case of the ChannelMaster, the software seems pretty poorly maintained and buggy, with complaints galore to be found online.
This left me with a couple of options. I could abandon the whole endeavour, or build my own PVR/Home Theatre PC to do my bidding. I priced out components, and it seemed pretty clear I could comfortably build a pretty good one for around $500, even with skyrocketing hard drive prices caused by floods in Thailand. This meant a 5-6 month payback period after I cancel the Bell subscription, which is totally fine. If we decide a few months down the road that this was a bad idea, I’ll have a decent PC I can use for something else for basically free. And honestly, it’s been several years since I put together a PC and I kind of missed it.
I was pleased to see that the scrappy Mini-ITX form factor is still alive and well. Not only that, you can build a pretty capable Intel Sandy Bridge based system on it, as a lot of vendors offer Mini-ITX sized motherboards with one of the Socket 1155 chipsets. I planned to stuff the finished product into the living room entertainment center, so the smaller I could make it, the better.
If you’re thinking really fancy, you can even get a chassis that makes the thing look like a typical home theatre component. Silverstone make, IMHO, the most convincingly home theatre-esque cases in this category, but they can be a bit spendy. I went with a cheaper InWin case for mine, because it seemed like it would blend in fine and didn’t look obviously like a computer shoved under the TV.
For the internals, I went with an Intel Pentium G840 CPU, which is basically a Core i3 without hyperthreading and with the AES and virtualization extensions disabled. I figured I’d start there and, in the unlikely event that it turned out to be too much of a weakling, I could pop in an i5 or something more beefy later. It also has the benefit of an on-die GPU, meaning I can use the single PCI-Express slot on the Gigabyte motherboard for the dual tuner TV capture card that will make all the magic happen. Add in a couple of hard drives, 8 GB of RAM, a cheap wireless keyboard and a Couch Mouse (who knew that was a thing?) and you’ve got yourself a PC. (I didn’t bother with an optical drive. We already have a Blu-Ray player we rarely use.) Hook it to your TV and stereo system using the motherboard’s HDMI and optical SPDIF connectors, and you can (almost) call it a Home Theatre PC.
Of course, it still needs an operating system and some software to make it do stuff, so I’ll write about that next.
I subscribed to Netflix’s streaming service basically the day it launched in Canada. Sure, the selection isn’t great, but the $8/month price tag places it firmly in the “good value for money” category in my mind.
One thing that Netflix brought into sharp contrast was the amount we spend on satellite TV. We’re a pretty busy family, so we don’t watch a lot of TV. What we do watch is almost exclusively recorded by our PVR when it airs and watched later when we have time in our schedule. Channel-surfing and the watching of random crap almost never happens, and when it does, it usually ends up being searching what’s available on Netflix rather than channel-flipping. As a result, paying over $100/month to Bell for satellite TV at some point tipped into the “insufficient value for money” category.
After a shenanigan with Bell where cancelling my Internet service with them in July resulted in further bills and hassle lasting through October (long story), I became determined to stop giving them money as quickly as possible. I could have signed up for cable TV instead, which would have stopped the money hose I was pointing at Bell, but wouldn’t have improved the value-for-money situation.
Ever since there has been TV, it has been broadcast over-the-air. But over the past few years it has all shifted to digital, (mostly) high-def broadcasts and the old VHF analog taps have been turned off, making millions of sets of rabbit-ears obsolete and ending the snowy, static-plagued VHF broadcasts most people associate with them.
The promise of modern OTA TV is pretty compelling:
- Free (as in beer) network TV.
- Digital HD signals, not further compressed by cable or satellite providers.
- No sim-subbing on US networks (if you live close enough to the border to get them).
When I started researching it, I was a bit daunted. People were talking about different types of antennas, as well as rotors, pre-amplifiers, and distribution amplifiers. It all seemed a bit much, but after reading a bit more, it became clear that for a lot of people who are motivated to talk about this sort of thing on the Internet at length, this is a game. The game is “pull in as many channels as possible” and they are willing to go to extreme lengths to do so. I don’t care quite as much about getting every theoretically possible channel, so I figured a less-extreme approach would serve my needs.
Based on an analysis by the amazing TVFool.com website, it seemed pretty clear that just putting up a decent antenna would get me a lot of the way there. There are some other channels out there I could try to get later with a bit more effort, but I’ll make that future-Jason’s problem.
So, last weekend I put up a ChannelMaster 4221HD UHF antenna on the antenna mast that conveniently already existed next to my house (it came with it!), aimed it roughly at 121 degrees with the help of the compass app on my iPhone and plugged it in to see what the TV could pull in.
The results were even better than I expected. I’m getting all of the Canadian networks that broadcast out of Toronto and Hamilton, and most of what comes out of Buffalo (no NBC at all, and ABC is dodgy at best). Even without NBC and ABC, it covers all of the shows we normally watch, plus should get us special-events like the Oscars (via the Canadian networks if necessary).
The next phase of being able to really cut the cord (which actually comes from space) is the ability to record the stuff when it’s on and play it back later. For me that means a home-theatre PC project, which is in progress. Update: More on that here.
If you’re using Hammock for REST under Mono and attempt to talk to an HTTPS endpoint, you may be surprised to see that it doesn’t appear to work. This is because the HTTPS negotiation is failing: Mono doesn’t trust the site’s root certificate (nor any other root certificates, out of the box).
There are two main reasons not to include default root certificates in Mono:
- Digital certificates are, like many digital files, copyrightable. This means that there are restrictions on how the root certificates can be distributed.
- We aren’t in the business of deciding who you are going to trust. Certificate Authorities exist all around the world. The ones best suited to you aren’t necessarily the ones best suited for everybody else, and having a terribly long list of “trusted” roots isn’t a very secure solution.
There are a few solutions to this that you can employ. You can:
- Seek out, download and install the individual root certificates that matter to you.
- Use the mozroots tool to download all of Mozilla’s trusted root certificates.
- Implement your own ICertificatePolicy in code to determine which certificates to accept.
If you want to write your own policy, you need to hook it into the ServicePointManager class:
System.Net.ServicePointManager.CertificatePolicy = new AcceptDodgyCertificatePolicy();
Be aware that this setting affects all of the web requests in your app that use the System.Net stack, either explicitly or under the covers (as Hammock does), so make sure you know what you’re doing before you go messing with it.
Before I even get started, I never use “Mother’s Maiden Name” as a security question if I can avoid it, and if that’s the only option, I generally lie because it’s pretty easy to find out anyway. So, I’m hopefully not giving up anything that might facilitate identity theft in this post.
I’ve been digging into my ancestry a bit because, being Canadian, the topic of one’s ancestral nationality sometimes comes up and, to be honest, I didn’t really know what mine was. I knew one line was primarily Scottish, but much was still unknown. As it turns out, I’m a mixed bag of various western Europeans, and my ancestors arrived in Canada a minimum of 4 generations ago, depending on which line you follow, so I’m going with “Canadian” as my nationality from here on out. It’s the most accurate.
All this got me thinking, though: there are a lot of surnames in my ancestry that fell by the wayside because of the tradition of women taking their husband’s name. But, I figure, one name is as good as the other, so my historically accurate (near as I can tell) full name, giving full credit to all* of the women and men who contributed to my genetic make-up, is….
This is going to complicate my Twitter handle.
*Obviously not all, but as many as I’ve been able to track down in the last couple of weeks.
** I’m blaming this one for my distaste of Christmas music.
Every now and then I decide I want to try running Linux full-time on my primary computer (currently a Dell Studio 15 laptop). I’ve made efforts before, and I usually can’t stick with it, the reasons usually being some combination of:
- Other family members have to use the computer and revolt at the change
- Some critical piece of hardware doesn’t work
- Some necessary software isn’t available
Since I now develop software on a Mac at work, I’ve had a great opportunity to re-familiarize myself with Bash and all of the awesomeness it provides, and I found switching back to Windows on my laptop to be kind of a drag, with its single desktop workspace, no built-in SSH support, and anemic command line (yeah, I know about PowerShell; I don’t care).
So, I decided to make the switch again a couple of weeks ago and so far it’s been a relatively smooth transition. I no longer share my computer with other family members (we each have our own now), hardware support in Linux, particularly Ubuntu, is pretty solid, and I can use Oracle’s (formerly Sun’s) VirtualBox to host a guest Windows OS for the odd piece of software that doesn’t have a decent Linux alternative (*cough* iTunes *cough*).
I’m not a “compile-your-own-kernel” type of guy as I would rather have my computer serve my needs than vice-versa, so I went with Ubuntu 10.04 for easy setup. I was happy that everything worked pretty much right off the hop, even the non-standard Dell keys for volume/brightness/wifi/etc worked as-advertised.
I had a bit of trouble with sleep & hibernate. It would just hang instead of going to sleep and required a hard reboot to come back to life. A bit of Googling narrowed it down to the fact that I keep an SD card in the laptop’s SD slot, and that causes some problems with the power management (unless the card’s filesystem is unmounted first). That was easily solved with a shell script that automatically unmounts and remounts the card before and after sleep, and it’s been working like a champ ever since.
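Mine was a plain shell script, but the idea can be sketched like this in Python, assuming a pm-utils-style hook that is invoked with the sleep/resume phase as its first argument. The hook location and the card’s device path are assumptions for illustration:

```python
#!/usr/bin/env python3
# Hypothetical sleep hook (e.g. dropped in /etc/pm/sleep.d/): pm-utils runs
# each hook with "suspend"/"hibernate" before sleeping and "resume"/"thaw"
# after waking. The device path below is a made-up example.
import subprocess
import sys

SD_DEVICE = "/dev/mmcblk0p1"  # adjust to your SD card's partition

def command_for(phase):
    """Map a pm-utils phase to the mount command to run, or None."""
    if phase in ("suspend", "hibernate"):
        return ["umount", SD_DEVICE]   # unmount before sleeping
    if phase in ("resume", "thaw"):
        return ["mount", SD_DEVICE]    # remount after waking
    return None

if __name__ == "__main__":
    cmd = command_for(sys.argv[1] if len(sys.argv) > 1 else "")
    if cmd:
        subprocess.call(cmd)
```
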
I’d also like to give a shout-out to the developers of RedCar who’ve done a reasonably decent job of replicating the TextMate editor in a cross-platformy way. It’s got a ways to go yet, but it’s off to a good start.
Lastly, I’m just telling you what works for me. I don’t care what you use. Do not read this as proselytizing, it ain’t.
Back in January I bought a shiny new i7 Quad-Core laptop. It came with a 500GB disk drive, but I wanted it to perform as fast as possible, so after the laptop arrived I swapped out the hard disk for a 60GB OCZ Vertex SSD. Long-term, I knew 60GB wasn’t going to be enough space for me, but I still had the 500GB drive, which I put in an enclosure to use as an external drive (the laptop has eSATA as well, so it will perform at its full speed) to store large, infrequently accessed files.
By the way, the SSD made my already fast laptop fast. Like amazingly fast. I wouldn’t be without it.
So, fast forward several months, and my 60GB SSD is, as I expected, running low on space. Since SSD is relatively new technology, the prices are falling pretty fast, so every extra month I can delay buying its replacement means saving money or getting more capacity for the same money on the next one.
I could move some stuff permanently to the external drive, but I want to use it as little as possible, since it’s a pain to get out, plug in, etc.
Enter my trick:
My laptop, like most, has an SD card slot. I also happened to have a nice 32GB Class 10 SD card I wasn’t using for anything important. I put that into my laptop and, using WinDirStat, tracked down the biggest space hogs on the main drive.
One thing that jumped out at me quickly was the folder C:\windows\installer. Apparently that’s the cache of all the MSI-based installers on the system, and they need to be there if you ever want to be able to uninstall the stuff. On my system this accounted for a bit over 3GB of space. Not a huge amount, but still 5% of my total capacity.
You can’t officially move this folder, but you can use an NTFS junction to make Windows think it’s still there when it’s actually somewhere else (in my case, on that SD card) and it’ll be none the wiser. (If you haven’t used junctions or hard links in Windows, check out the mklink command from an administrator-level command prompt for details.)
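The move-and-link pattern looks roughly like this. On Windows the link would be a junction created with mklink /J; the sketch below uses a symlink as a stand-in to show the mechanics, and all the paths are made up:

```python
# Sketch of the move-and-link trick: relocate a folder to slower storage,
# then leave a link at the old path so software is none the wiser.
# On Windows you'd create the link with: mklink /J C:\old\path D:\new\path
# A symlink stands in for the junction here; all paths are made up.
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
old_path = os.path.join(root, "Installer")            # original location
new_path = os.path.join(root, "sdcard", "Installer")  # the slower storage

# Set up a folder with a file in it, standing in for C:\windows\installer.
os.makedirs(old_path)
with open(os.path.join(old_path, "setup.msi"), "w") as f:
    f.write("fake installer")

# 1. Move the folder to the new location.
os.makedirs(os.path.dirname(new_path))
shutil.move(old_path, new_path)

# 2. Leave a link at the old path pointing at the new one.
os.symlink(new_path, old_path)

# Anything reading the old path transparently follows the link.
moved_ok = os.path.exists(os.path.join(old_path, "setup.msi"))
```
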
I spent an afternoon playing around with moving different stuff on and off the SD card with hardlinks and junctions and wound up freeing up about 10GB of space on my SSD, which should buy me a few more months.
Stuff I was successful in moving without noticeably impacting performance:
2) The Streets & Trips data files.
4) My Dropbox Folder
5) My Documents
6) Various and sundry single files.
The SD card is decidedly slower than pretty much anything else, so you’ll want to limit what you move to infrequently used files, or large numbers of very small files.
Stuff I moved, but then moved back due to crappy performance:
1) My browser’s data files (cache, etc).
2) My email client’s data files.
Stuff I didn’t even try to move because it seems like a recipe for disaster or terrible performance:
1) The Windows Swap file.
3) Virtual machine virtual hard drives
And of course, if you do this, you have to leave the SD card in all of the time if you want it to be fully transparent and error-free.
Lastly, I’m sure none of this is officially sanctioned, so if you do it and it ruins your data, your computer, or your life, I’m not responsible.
I’m sure I’ll be upgrading this disk soon enough, but in the meantime, I’m happy to dump off what I can onto another piece of internal storage that’s half the size of my primary disk.
See our joint open letter to the TweetSharp community here. What follows are my own personal thoughts regarding our decision.
As one of the two authors of TweetSharp, I’ve been fortunate enough to contribute to a project that, by any measure, is popular and well used. It’s been downloaded thousands of times by developers and used in applications all over the world (and at least once from orbit on the ISS). We consistently see it recommended by other developers on Twitter and Stack Overflow and other forums where developers gather. This makes me feel happy, and proud of what we have accomplished.
Knowing that a large user base depended on our efforts, I personally felt a responsibility to deal with important issues quickly, to respond to new Twitter features and get them in the library as fast as possible, and to help new users get up to speed without too much difficulty.
Either because of the hands-on approach, or other factors, TweetSharp has begun to feel more like our product than anything else. No one but Daniel and me has contributed in a significant way, and eventually it began to feel more like a vendor/customer relationship than a community of users trying to build something together.
If you examine the commit history for the project, you’ll see that most of my checkins happen between Midnight and 3am in my timezone, and on weekends. This is because I have a full-time job and a family, so my most productive time is late in the evening when everyone else is asleep and I can work without distraction. In the long-run, this kind of effort isn’t sustainable. With user-streams, annotations, and who knows what other Twitter features coming on board in the next little while and beyond, the workload isn’t going to diminish, and frankly, I can’t continue working as much as I have on TweetSharp while maintaining my commitments to my job and my family, and I can’t give up my family or my income to work on a product and give it away for free.
As such, Dan and I talked a lot and thought a lot about how to continue. Attempting to commercialize it either by selling support contracts or making it closed-source and selling licenses was discussed, as was trying to find an ongoing commercial sponsor that would allow one or both of us to continue on a full-time basis. Ultimately, we decided we’d let the community determine its fate.
I get the feeling, and I’ll be delighted to be proven wrong, that your average .Net developer’s interest in open source stops shortly after the price tag. Maybe we haven’t done a good enough job making what we consider to be implicit (that open-source projects encourage and accept community contributions), explicit. (Maybe I should have dangled ridiculous carrots earlier and more often.) Or maybe .Net devs are so accustomed to being handed code and guidance from Microsoft and “the Internet” in general that they don’t even bother to realize that behind a lot of libraries are one or two people giving up their spare time to make something useful.
Whether TweetSharp thrives or withers on the vine will be up to its user base. Personally, I’m happy to continue in a custodial role managing a steady stream of community contributions or, should some benefactor decide there’s cause enough to fund the project’s continuation that way, as a full-time developer on the project, but I hope I’ve seen my last 2:30am checkin for a while.