Did you know you can make a multi-camera video using Quicktime as one of the cameras?
Tomorrow I’ll be filming some presentations to a community group. There will be a few speakers – but the format will generally be someone speaking from a lectern to a seated audience with some slides projected on a screen at the front of the room.
There are lots of ways to film this sort of thing – but I try to do it with as little equipment as I can manage.
I’ve got a couple of tricks that can improve your finished video, and make shooting and editing a little easier.
Let’s assume you have one camera (we’ll get to multi-camera in a bit). Sit the camera at the back of the room so that you capture the speaker across the heads of the audience. This will have your speaker facing the camera most of the time. Frame a fairly tight shot – the head and shoulders of the speaker. Don’t worry about getting the slides in the shot.
If the laptop that is running the projector is a Mac, open Quicktime and do File -> New Screen Recording.
Click the small white down arrow, and make sure ‘Built-in mic’ is selected as the audio source.
Hit the record button, and set it to record the whole screen (whichever screen the slides will be displayed on). You are now recording the slides, just as they are presented – including transitions, animations and video playback – the whole thing.
If you are using a Windows PC, try CamStudio which is free software. I’m sure there are good alternatives for *nix systems (post a comment if you know of one).
Record the presentation. At the end, stop the screen recording on the laptop. Save the screen recording.
If you have a 2nd camera I’d suggest you set it to capture a shot of the speaker and projector screen together – so you can use footage of the speaker pointing at and interacting with the screen if needed. The other useful spot can be near the front of the room – possibly over the presenter’s shoulder in a fairly wide shot. This will give you a view of the room – which can provide context for the viewers of your video. It may also provide an angle you can use if people ask questions. Either way – you’ll probably want to use this second camera angle sparingly.
When you are ready to edit the video of the presentation – you can take the screen recording and treat it like camera footage. If you are using Final Cut, import the camera footage and the screencast, select these clips and then click File -> New -> Multicam clip. The standard options should work ok.
Once it’s done you’ll have each camera’s footage and the screen recording synced via the audio. Do Shift+Command+7 in Final Cut to open the Angle Viewer. Before you start playback, make sure you have the angle with the best audio set to provide the audio track – then use the ‘video only switching’ tool to choose which angle you want to show at any point in the video.
Good audio is important – if you’re new to this, go and look for some tutorials on recording audio. You can record an audio track on a dedicated recorder like a Zoom H4N, include it in your multicam clip, and set it as the audio track for your video.
Multi-camera video editing is easier after you’ve seen some demos. Search on YouTube for examples.
I’ve worked out how to do faster uploads to YouTube via Amazon S3 and I’m pretty excited about it.
Over the last few months, I’ve been uploading even more video to YouTube than usual. I’ve probably uploaded about 10+ hours of TEDxCanberra video each year for the last few years. Since the start of 2014 I’ve been producing a lot of video for canberralive.act.gov.au – probably 50 hours or so.
And this isn’t small, low quality video – it’s normally 1080 HD at a fairly high bitrate. The video I published today ran 16 minutes 26 seconds and was 1.4GB in file size.
Each of these was finished just an hour or two before they were due to go online for a national audience to watch (no pressure!).
Once the edit is done, rendering the video file takes a while – but I’ve found the upload time to YouTube has been quite varied. Sometimes quite fast, other times painfully slow. And this is across a range of internet connection types and speeds.
On a 100/40Mbps NBN connection I had access to, uploads of video files that were 3-4 gig in size could take hours. On the fast internet in my workplace (about 150/60Mbps I think) these large files could again take hours to upload to YouTube.
I can’t say I did any exhaustive testing to find out why – I put it down to browser problems, an internet bottleneck, or throttling at YouTube’s end of the connection.
I was wrong!
Due to the time crunch for the GovHack videos, I asked a geek friend @Maxious if we had access to a fast cloud file server where I could upload the video, to provide a second way to distribute it to the sites around the country. He was very quick to point me to an Amazon S3 instance he had set up, and provide a login.
Uploading the 800MB GovHack Open video took about 5 minutes. Uploading to YouTube had taken about 2 hours.
The opportunity to save so much time in my video upload workflow is too good to ignore – but getting video onto S3 only helps so much. What I need to do is test the upload speed when moving video files to YouTube from Amazon S3. Even if this is no faster (i.e. YouTube is the bottleneck), it would allow me to get the files off my network and PC quickly and let me leave work for the day.
I have done some poking around, and come up with a clunky way to upload to YouTube from S3. I’ll post the basic steps below, but I think there is a lot of room to improve this. These steps assume you know your way around S3 and Amazon EC2. They make use of the youtube-upload python library, which is pretty easy to get running using its installation instructions.
Upload video to YouTube from S3
Transfer your video onto S3. I’m using a client called Cyberduck.
Make the video publicly readable – copy the URL.
Login to your EC2 instance (I created a new one based on the Debian Wheezy AMI).
Copy your video file from your S3 bucket onto the EC2 instance (wget works with the public URL from the previous step), then run the youtube-upload script on the file.
Check your YouTube account – you should see the movie has been uploaded :)
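The copy-and-upload steps above can be sketched as a couple of shell commands. This is a dry-run sketch – the bucket URL, filename and title are placeholders, and youtube-upload’s flags vary between versions, so check its help output before relying on them.

```shell
# Dry-run sketch of moving a video from S3 to YouTube via EC2.
# RUN=echo just prints each command - clear it to actually execute.
RUN=echo

S3_URL="https://s3.amazonaws.com/my-bucket/govhack-open.mp4"   # placeholder
FILE="${S3_URL##*/}"   # strip the path, keeping just the filename

# 1. Pull the file from S3 onto the EC2 instance (fast, in-network)
$RUN wget -O "$FILE" "$S3_URL"

# 2. Push it to YouTube with the youtube-upload script
$RUN youtube-upload --title="GovHack Open" "$FILE"
```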
In the testing I did of the above process, the 800MB video file imported into YouTube in about 8 minutes, and then needed about 10 more minutes to encode ready for viewing.
I’ll do more testing of this, but it looks like I can reduce my YouTube publishing time for a file of this size from about an hour to about 10-12 minutes, depending on the speed of my connection.
S3 is probably not needed. I could simply move the file to my EC2 instance, but I can’t SCP or SSH from my corporate network so it’s hard to test. Using S3 is a reasonable workaround for now.
If using S3, there are AWS APIs that might make the wget step redundant (it’s pretty lame to copy the file to S3 and then copy it again to EC2 – but as a test, I can live with it).
I found the user name / password login for the youtube-upload script worked best when I used Google’s two-factor authentication and set up a profile just for this script.
It would be possible to build a web front end for this type of process.
If not a web front end, it would be easy to build a script that automated this process via cron. I’d need to look into how the title of the video could be set – but this could use a temporary title such as the date + filename. Then remove the video file when done, keep a log, etc.
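The cron idea above could look something like this sketch – the queue directory, log path and youtube-upload flags are all assumptions, and RUN=echo keeps it as a dry run:

```shell
#!/bin/sh
# Hypothetical cron job: upload every video waiting in a queue
# directory, using the date + filename as a temporary title, then
# remove the file and keep a log. RUN=echo makes this a dry run.
RUN=echo
QUEUE="$HOME/uploads/queue"
LOG="$HOME/uploads/upload.log"
mkdir -p "$QUEUE"

for f in "$QUEUE"/*.mp4; do
  [ -e "$f" ] || continue              # no files waiting
  name=$(basename "$f")
  title="$(date +%Y-%m-%d) $name"      # temp title: date + filename
  if $RUN youtube-upload --title="$title" "$f" >> "$LOG" 2>&1; then
    $RUN rm "$f"                       # remove the video file when done
    echo "uploaded $name as '$title'" >> "$LOG"
  fi
done
```

Run from crontab, something like `0 * * * * /home/me/upload-queue.sh` would sweep the queue hourly.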
If it had a web front end, it would be easier for lay people to use this approach – add titles, check for success/problems, etc.
There is probably a project out there that already does all this, but a 5min search on GitHub didn’t turn anything up :)
I’ll try and refine this a little over the next week or two and post an update.
For the last couple of years, Dad has been using a fairly ancient hand cranked coffee grinder that he picked up cheap at a garage sale or something.
We love the novelty of it – and the feeling that you are ‘hand crafting’ each brew of coffee, but the ground coffee is quite coarse and a little uneven.
Enter the Christmas present – a Hario Mini manual coffee grinder. This has fancy things like ceramic burr grinding plates etc, and fits into a nice compact box that could easily be taken on holiday etc. The problem? It takes ages to grind the coffee. Probably 2-3 minutes of grinding to make enough for a 600ml french press.
The solution: remove the hand crank normally used with the grinder and replace it with a battery-powered cordless drill. The drill chuck fits nicely onto the crank point – tighten the chuck and away you go. In about 20 seconds, plenty of evenly ground fresh coffee.
At TEDxCanberra, we create email aliases for members of our core team. These aliases are normally redirected to gmail accounts that team members already have.
Here are some details showing how to set up the email alias in Gmail so that you can send and receive from your alias email address – in this case, I’m setting up a new alias called email@example.com.
First, login to Gmail and open the settings screen.
Click on the “Accounts and Import” tab, and then the “Add another email address you own” link.
Fill in your name and the new email address that you wish to setup, then click “Next Step >>”
Just go with the easy option. Then “Next Step >>”.
Hit “Send Verification”.
You can probably just close the window below – but leave it open for a moment. Go check your email.
In a moment or two, you should see a new email in your inbox from the Gmail team. Open the email.
Click the big link in the middle of the email :)
Now, if you go look at the “Accounts and Import” tab in the settings, you’ll see that you have another email address listed.
And now, when you compose a new email, you have a drop down menu in the From area where you can choose which email address to send from.
Also, when you reply to an email sent to your new email alias, it will automatically use the new address.
When you are ready to go a step further, you can also setup a signature just for use with your new email alias. Gmail is clever :)
When thinking about the staging and projector technology for TEDxCanberra in 2012, we understood that we had a much more impressive theatre and stage due to the move from the National Library of Australia (capacity 300) to the Canberra Theatre Centre (capacity 600). Rather than working on a small lecture theatre type stage (raised about 20cm in front of 300 seats on a gentle incline) we would be in a ‘proper theatre’ with a much larger stage space (about 10m wide by 8m deep, raised 120cm to about 600 seats in a mix of stalls and galleries).
The other big change was the interior height of the new venue. When combined with the full-on theatre rigging and lighting system – there would be opportunities to hang things above the stage, and probably have them move in and out during the show if we needed.
Given that we had only a small budget, but lots of time – what would we do? Should we hang lots of interesting stuff in the air above the speakers? Bicycles, bookshelves, patterns & shapes? We had the option to do something like this – but I was also hoping to avoid buying, storing, moving, installing, removing lots of material. In addition to this – while we are a creative bunch, I’m not aware that our team has any art/sculpture types that might have been able to produce something with a look that fit with our theme for this year – ‘Optimistic Challenge’.
About 6 months before our event, we saw the impressive staging at the TEDx Organisers event in Doha, and I immediately thought this was an approach we could scale down and use at TEDxCanberra. It appealed for a few reasons:
we would not need to buy/store/move/install lots of stage decoration
while we don’t have any physical artists in the team, we do have a number of very talented digital designers who could produce media
we are pretty comfortable using computers and learning new software
To scale things down to something we could manage, we decided to go with a ‘double wide’ rectangular screen. This wouldn’t be as visually impressive as the screen used at Doha, but it would be quite different from the typical screens used at events and that would be enough to make it special and interesting.
So I started reading a lot about projector systems, beginning with the terms ‘edge blending’ and ‘hippotizer’ – the brand of ‘media server’ used at the Doha event. It’s worth noting at this point that while we do have a budget for staging the show, we aim to do as much as we can through donation or volunteering rather than bringing in paid professionals. A couple of quick calls, and it was starting to look like renting a professional media server was going to be way out of our budget – not just for the days of the show, but for the weeks we would want in advance to build the show and learn how to operate the system.
The next step was looking at software that could run on a PC of our own. Like the Doha example, we wanted a system where we could incorporate video of the person speaking while they were on stage, along with a slide deck (PowerPoint/Keynote) and some nice visual theming. Rather than using physical props and stage decorations – this would be the space we would use to dress up the stage and produce a great looking result.
First steps – what is edge blending?
Edge blending is when you use two or more projectors to create a bigger projected image with no visible ‘joins’. This lets you project onto screens that are different from the typical 4:3 or 16:9 shapes, and lets you project high resolution images onto large screens. Edge blending two HD projectors would result in a canvas size of 3840×1080 (1920×2) – or at least that is what I thought at first. Once the edge blending overlap occurs, this reduces to something like 3440×1080.
Edge blending is interesting – once it’s working, the media server computer is outputting two video signals, each with a resolution of 1920×1080, but when they are projected onto the screen the two images overlap. If they overlap by 400px, the final canvas size is 3440×1080 (400px narrower than it would have been if you just put the two projected images end to end).
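The arithmetic is simple enough to check: the blended canvas is twice the projector width, minus the overlap.

```shell
# Canvas width for two edge-blended 1920x1080 projectors with a
# 400px overlap: twice the projector width, minus the overlap.
WIDTH=1920
OVERLAP=400
CANVAS=$(( 2 * WIDTH - OVERLAP ))
echo "$CANVAS"   # 3440
```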
Hunting around for some suitable media server software, I was looking for something with edge blending in the feature set and found Arkaos Grand VJ. We used this right up until two weeks out from the show, when we realised it wouldn’t do one of the things we needed – but more on that later. Grand VJ allows a number of media files and video sources to be mixed together – including footage from webcams or, in our case, video capture cards.
My first plan was to use two Blackmagic Intensity Pro cards installed in the PC. Each of these can take a HDMI input, which could be overlaid on a nice looking background and projected onto our big screen. Using software like this would allow us to create ‘scenes’ that we could switch between for different parts of the program. For example, during a presentation we would show a video feed of the speaker on one part of the screen, and the slide deck on the other.
For video playback, we could place the image in the middle of the screen. Each presenter scene could also be preloaded with the name of the speaker, their slide deck and maybe a photo.
Here is the first in the series of videos that I made to show my team how the idea was progressing.
Feedback from the team was good – the concept was worth pursuing. You can see in this video that the Grand VJ software randomly inserts a large DEMO text on the screen from time to time – other than this its pretty much fully functional which is nice.
Next step was to add the capability to take a video signal from a laptop with the speaker’s slide deck so that it could be incorporated into the projection. I ordered a Blackmagic Intensity Pro PCI-e card from Videoguys to test that it would be compatible with my hackintosh and to do some further testing. The card worked fine, and I was able to take HDMI signal from a couple of sources. Here is a test video showing video signal from an XBOX360.
The Intensity Pro card does a great job of capturing via HDMI, but it is very fussy about the incoming signal. It has to be configured to exactly match the incoming signal, i.e. 720p at 59.9Hz for an XBOX 360, 1080p at 30Hz for an iPad 3, and 1080i at 59.94Hz for a Canon 7D. If these settings aren’t right, you don’t get a video signal at all. I also demoed how we might display the TED talk videos that are part of the TEDx program.
A change of direction
The more time we spent with Grand VJ – the less confident we were about it. Whenever we closed the program, it would give us an error message. And there were some actions that would reliably cause it to crash. The clincher was discovering that it couldn’t make use of two Blackmagic cards simultaneously. We then considered a number of other programs and decided to go with VDMX, which is a very flexible media server and show control program. VDMX is a very open system and can take inputs from a number of places – such as two Blackmagic cards, but also other software running on the machine.
The projected image was made up of a number of layers. Anyone who has worked with Photoshop or similar systems will understand the idea: an image is created by laying image layers one on top of the other, and most layers have transparent areas that allow the layer below to show through. We used the slow circle animation as our lowest layer; on top of this we overlaid a transparent image with the name of the speaker and the event logo. The next layer was the speaker’s slide deck, and above that the live video image of the speaker on stage. VDMX allows for some sophisticated uses and preset configurations, and we made use of its edge blending capabilities. We were hoping to do more with presets and a control interface (possibly OSC), but time was not on our side. We had also hoped to do some very impressive things with Processing – realtime data visualisation at the event based on attendees and other ‘on the day’ inputs – but we will need to try that next time.
Keeping with the no-budget theme, we decided to make our own screen. Our excellent roadie/technical operator constructed the screen out of timber and MDF panels painted white (with a hint of grey, apparently). The screen was constructed in six sections that were bolted together during bump in, the day before the show – these were then flown (hung) from the rigging system in the theatre so that they floated in the air near the back of the stage. Behind this was the large white cyclorama regularly used at the theatre – we lit this with some red lights for effect.
As mentioned above in the section on edge blending, the early thinking on the screen was that it should be double the width of a widescreen projection. Widescreen is normally 16:9 – so we thought 32:9 would be the ratio of width to height. As it turns out, this isn’t the case, because when the two projectors overlap for edge blending you lose some of their width. While we would be sending the projectors two 1920×1080 signals, the screen size would be more like 3240×1080 (a 3:1 ratio) rather than 3840×1080 (a 32:9 ratio). It can be a bit hard to get your head around – here is a diagram.
The screen was great – and is something that we hope to use again or lend to other events. The only thing I would try to improve for next time is the straightness of the screen. It had a slight bow which made it difficult to aim projectors at. This would have been fixed easily by adding some straight reinforcing to the rear of the screen.
How it happened on the day
These ideas were all pretty good – but in the end we did something slightly different – though it achieved more or less the same look.
The advice from the company that hired the projectors to us was that there is a delay of a few frames between the speaker on stage and the image of the speaker projected onto the screen. This is caused by different parts of the signal chain needing a few milliseconds to do its thing. The camera -> switching gear -> projector – and anything else in the chain. A long delay can be noticeable/distracting for the audience.
In testing, the Blackmagic card and processing through VDMX seemed to introduce a delay of a few frames – probably a little more than would have been the case with a dedicated hardware switching console. The projectors we hired could do picture in picture (PIP), so in the end we used this to put the footage of the speakers into the projection, and the Blackmagic cards weren’t used for this. This added some extra complexity for our operators, as it meant switching the PIP on and off as required. If we had used VDMX these changes could have been automated, but the frame delay may have been noticeable.
Using the PIP function on the projectors also meant there was one less thing relying on the media server computer, which was a positive. A single Blackmagic card was used to capture the speaker’s slide deck from a MacBook Pro running Keynote, carried via HDMI from the MacBook to the Blackmagic card. We used the monitor output on the Blackmagic card to feed a HDMI splitter, and then the throwback screen on stage and the video streaming system. If you want to see the finished product, take a look at this talk (Brian is pretty great too).
Coming up in a future post – How we did the video shoot / live stream / edit
Ok – so this won’t be a rant about poor customer service or business practice – but goodness me, if MSY in Fyshwick in Canberra isn’t aware of the things they can do to improve their business, it’s about time someone made some suggestions. Consider this a business improvement rant.
I’ve been buying things from MSY in Fyshwick for about a year. If you’ve never been there – they sell PC components like hard drives, graphics cards, motherboards and the other parts that people need when they are assembling their own PC rather than buying a pre-made one off the shelf in a shop like JB Hi-Fi or Harvey Norman.
People typically buy PC components when they want to upgrade an existing computer, or when they are building a machine to a particular spec. It’s often possible to get a more powerful computer for less money if you don’t mind putting it together yourself. It’s not the cheapest way to get a PC (to do that, pick up a cheap laptop for $399 from JB) but for the tech capable, or gamers chasing the fastest performance, it’s often the way to go.
Most Saturdays you can find a line of customers stretching out the door of their shop. They have no problem attracting people with their low prices and range of stock. Their preferred ordering system seems to be a preorder via the website with payment and pickup at the store.
Here is where it falls down. For the entire time that I have been trying to shop at MSY, they have had the slowest customer service / checkout process I could possibly imagine. They have two staff working at checkouts at the front of the store, and people line up to buy their order. If the phone rings, they stop talking to customers in the store and deal with the phone call. Often these phone calls take 5-10 minutes to resolve as they offer advice on which component will work in a particular system or track down stock to see if it is available in the store. The face to face customers often have similar questions that can take 5-10 minutes to resolve before they buy their parts and leave.
Stock in the store also seems to be poorly managed. For a long time, it was hard to get into the shop past the piles of unpacked inventory stacked around the floor area. They have changed the store layout since then, but at the same time they stopped customers from self-serving from a selection of stock – adding to the demands on the two available staff.
This is why the line of people is stretching out the door – the two staff can only process about 20 customers or phone enquiries per hour between them.
Make it better
The painful thing is that there are such obvious solutions. In brief – if it was my store, here is what I would do.
More staff – particularly on Saturday mornings. These could be picking orders off the shelf, ready for payment/collection by customers. These staff could also unpack incoming stock and manage the inventory system.
A collection desk – where only pre-arranged orders can be collected. This would incentivise people to place orders online.
Up-to-date stock levels on the website, so that customers don’t waste time coming to the store to find that stock is actually not available (as I did today after waiting for 15 minutes).
If they added more staff they would more than cover the extra wages. The staff could be high school kids with an interest in computers – they would be happy to be paid the same as their friends working in retail or hospitality. The hourly rate for an 18-year-old is about $15. I’m not sure what the average profit is on each sale at MSY in Fyshwick, but if it’s $1 and they handle 15 sales an hour on the collection desk then they have paid for themselves – and more importantly, MSY will be meeting at least the basic expectation of customers that they won’t need to wait in line for ages.
In addition to the collection desk, I’d also suggest they setup an express enquiry desk where customers can buy without a pre order. Everything else goes to the two sales staff they have already.
The reason I’m writing this now is not just that I wasted a chunk of my morning only to find that they had no stock of any SSD in the size I needed – it’s that every time I stand in line at the shop (or leave before I get to the front of the line) I think: this is so stupid – they are not making nearly as many sales as they could if they would only add a couple more staff. It’s just such an obvious missed opportunity.
So come on MSY, put on a couple of school age people on the weekends and profit!
I made this simple pool noodle parking sensor to help when parking my car in a tight spot. About $10 worth of materials and takes about 5 minutes to assemble.
We are parking a 2nd car in our garage these days, and it’s a fairly tight fit. The cars need to be parked nose to tail with only a little room between them. The car that normally parks 2nd has terrible visibility of the front edge of the car, so getting close enough meant having a 2nd person guide the car into place, or getting out of the car a couple of times to check the size of the gap.
Enter the pool noodle parking sensor. I had two pool noodles stored in the garage, and a couple of timber offcuts. Join them together with some wire and they make a great indicator.
As the car closes the gap, it touches the pool noodle before it hits the car. Because the pool noodle is flexible, it waves around and gives a clear sign that the car has touched it. Nice and simple, and no chance of scratching either car :)
I’m quite interested in how startups can work and succeed in Australia. And if ever there was a team of people with the right mix and quantity of experience behind a startup – this would be it. It’s going to be fascinating and exciting to watch. I’ve only been aware of the Light by Moore’s Cloud Kickstarter project since the night it was launched, and having looked at the team of people, I hate to say my first impression was “that’s it?” I mean, sure, it’s a light that people can control with their phone/tablet – or program for other stuff – but is that enough to make this a novel and interesting product?
No matter how good Light is, how is a lamp worth any hype at all? On top of that I think there is a rising skepticism about kickstarter projects that aim to raise a big amount of money and turn out to be vapourware.
Honestly – I’ve thought about it and I’m still not sure .. but having read some of Mark’s blog posts today, I think the most interesting part of this project is the way they are running their business.
One of the ideas I have been interested in for a while is how openness can form part of the DNA of an organisation, leading to a new way of doing business and achieving good things – and as it turns out, this is exactly how Moore’s Cloud is planning to operate. And again, looking at the team – this makes sense. Having briefly met Mark and Kate and followed them online for a while, it’s no surprise that they are creating something that is different in a fundamental way – even if it is not as easy as following the typical path for producing a new product.
This is the ‘exciting and new’ that I’m more interested in :)
In Mark’s blog posts he is also being open about how some non-geek groups are not immediately enthused about the product, and he is upfront about the need to address that. They are even sharing a breakdown of each cost that went into the development of the prototypes – down to the taxi receipts. What a refreshingly candid approach! And this isn’t pointless openness with owners and customers: as consumers are increasingly able to choose from suppliers around the world, this transparency is a differentiator, and if they can keep it up, it could serve as the basis for some strong customer relationships.
In being open about the problems they are facing, they can draw on a wider community of people to offer solutions and protect customers and investors from shocks if something bad happens.
Maintaining this level of openness might be a challenge – it seems so many deals are done behind the scenes relating to the value of contracts, access to technology and finance – and some potential partners may simply (mistakenly) decide that ‘new’ is the same as ‘risky’. And as confidence has a significant role in how customers and investors deal with a company – what will happen if this level of transparency shows an imminent cash flow problem? What might happen if their balance sheet shows an excess of cash?
So if you thought that Kickstarter was the place for new ideas to get backing, Light might be the project for you – even if not for the product itself.
A couple of bits of Mark’s blog post were close to my own thoughts about why Light might not grab some people’s imagination. Sure, as a manifestation of the ‘internet of things’ Light is pretty cool – but even within geek circles that is not a concept that is well known or understood. Most people will simply think it’s a novelty lamp (and while I’m at it, will it even be a good lamp for typical lamp tasks like reading, or will the LEDs be too dim?). So while the stated goal is to take a bite of the emerging ‘internet of things’ market, until this market exists they are basically selling a novelty lamp. Finding ways of illustrating what the IoT is, and why it has value, would be my #1 marketing job.
It does need more examples of how it beats a typical light, and why it is worth the cost. How about a cost comparison with a nice ‘designer’ lamp that is intended to fit with a modern decor? At the current $99 price, Light might look like a bargain.
To sum up – I’m excited about the new way of doing business they are demonstrating, but not yet sure about the product – but hey, I’ve got 51 days left to decide. And because I want to be a helper, not a hater, I’m going to try and think of some little things I can do to help them along. Starting with some use cases they can consider for demos.
Possible Use Cases
So Mark and team, here is my tiny contribution – if you can demo it, it will resonate with a particular group of people.
Toddler traffic light
For kids that are old enough to follow instructions, but too little to read or tell the time. A traffic light system to keep them in their rooms until a decent hour of the morning would be great. Overnight, maybe it would be red, meaning ‘stay in your room’, and at 7:30 (or some other parent-determined time) it would go green. Also, a red light glowing in the child’s room all night might be a bit scary, so perhaps the red light only turns on just before dawn.
The same system could be used for quiet time / nap time during the day to keep the child in their room for a period of time. Maybe it even has a subtle transition from red to green over the timer period so that the child has an indicator of how much time remains.
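To make the idea concrete, here’s a rough sketch of the schedule logic. As far as I know Light doesn’t have a public API yet, so `set_colour()` below is just a hypothetical stand-in for whatever command the lamp eventually accepts, and the times are made-up defaults a parent would configure.

```python
from datetime import datetime, time

WAKE_TIME = time(7, 30)   # parent-determined "okay to get up" time
PRE_DAWN = time(6, 0)     # only show red shortly before wake time

def colour_for(now: time) -> str:
    """Return the colour the lamp should show at a given time of day."""
    if PRE_DAWN <= now < WAKE_TIME:
        return "red"      # stay in your room
    if now >= WAKE_TIME:
        return "green"    # okay to come out (stays green for the rest of the day)
    return "off"          # no scary red glow in the middle of the night

def update_lamp(set_colour, now=None):
    """Push the current colour to the lamp via a hypothetical set_colour command."""
    set_colour(colour_for(now or datetime.now().time()))
```

A nap-time version would just swap in a start time and an end time for the quiet period, with the same red-to-green transition.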
Skype Indicator / Bat signal
Light changes to a preset colour / pattern when a favourite friend comes online. Great for keeping in touch with friends on the other side of the world. Or great for teenagers who want to covertly send a signal to a group of friends who each have their own Light. When they all turn purple it means ‘Google Hangout happening now’.
So last weekend I took part in the Global Service Jam event here in Canberra. It’s the first time this event has happened in Canberra (that I know of), and it’s part of a global network of events that all take place on the same weekend.
I wasn’t too sure what to expect going into the event – and given that participants were asked to give up from Friday evening until Sunday afternoon, it was a bit of a gamble. I’ve been to a couple of 24-hour hack days before – and I think this was going for a similarly intense experience. In Canberra, people don’t tend to be that hardcore, and are always keen to go home and sleep in their own beds. This event was no exception – perhaps even more so, as the mix of people was much broader than you’d typically find at a hackday – not just geeks and programmers. That makes total sense when you consider that this was a hackday for ‘concepts’, not technology. The crowd of about 70 people was a mix of ages, with many of the older half of the group working professionally in design or innovation roles in their day jobs.
After an hour or two of opening activities and fooling around in the wonderful new building at University of Canberra where the event was held, we were presented with the theme of this year’s event via video from Germany. ‘Hidden Treasure’ was the theme we were to use to build a service. Nice and broad, if you think about it for a couple of minutes – it’s a theme with lots of potential aspects to it.
Once the crowd had split up into groups, we all started writing on the walls and whiteboards to extrapolate what ‘hidden treasure’ meant to us, and what service you might design around these ideas.
Lots of people arrived at the idea that the ‘treasure’ might be knowledge or information. Most groups talked about services that would serve to collect or preserve this ‘treasure’. In particular, each group came to the idea that given Australia’s aging population there may be some service that could be provided to help capture the knowledge of older Australians – whether for family or business use.
Perhaps the user, having buried their precious item, could take a photo with their smart phone and upload it to our service, making a note of how long to hide the information, and who to notify when the time expired. Extra info could be included, such as the coordinates of the user when the photo was taken, and whether the user could extend the time before the reminder was sent. All uploaded information would be encrypted to keep it safe.
Now, the user could rest safely knowing that the ‘treasure’ would remain ‘hidden’ until the time of their choosing.
Of course, this is a very literal interpretation of ‘hidden treasure’, but I think that sometimes that is a nice approach to take at intensive events like this. It means you always have a clear link back to the theme of the event.
There are some other cool applications that come out of this – such as the ‘insurance policy’ you see in the movies, where the blackmailer says “Pay up – or else! And if anything happens to me, the photos will be sent to the newspapers.” Here the hidden treasure is set up to send the note to the email addresses of some media outlets in 7 days, unless the user triggers the time-extension function.
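The timed-release core of the idea is small enough to sketch. This isn’t the prototype’s actual schema – the class and field names below are my own invention – but it shows the two moving parts: a deadline, and the extension function that keeps the secret hidden.

```python
from datetime import datetime, timedelta

class HiddenTreasure:
    """A secret that is revealed to its recipients once the deadline passes,
    unless the owner keeps extending it (the 'I'm still alive' button)."""

    def __init__(self, recipients, hide_for: timedelta):
        self.recipients = recipients
        self.deadline = datetime.now() + hide_for

    def extend(self, extra: timedelta):
        """Push the reveal date further into the future."""
        self.deadline += extra

    def due_for_release(self, now=None) -> bool:
        """True once the deadline has passed and the recipients should be notified."""
        return (now or datetime.now()) >= self.deadline
```

A background job would simply loop over stored treasures, and for any that are `due_for_release()`, decrypt the payload and email the recipients.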
Other possible uses include ‘Amazing Race’-type games where ‘secret messages’ are sent out at pre-defined times to help people complete a fun activity – or scheduling messages from a big night out to go to your friends the next day.
But the core idea is very simple – and I think this lends it to a wide variety of applications. It’s a platform rather than a message.
Our service was judged 2nd best on the day – but the event wasn’t really about competition. Just an exercise in thinking creatively, and working with a team to go from a rough concept to a prototype service. For a lot of people in the room, going from nothing to a prototype website in just a weekend is a pretty new and amazing experience – either because they are still in academia and yet to try it out, or because they are deep into corporate careers where these sorts of things take a long time to bear (even prototype) fruit.
Next time I go to one of these events, I’d love to see it organised along skill lines – with each team having a mix of skills such as a coder, graphic designer, user experience designer, marketer, subject matter expert etc. Most of the weekend could be spent in these diverse teams, but some of the time could be spent with these skilled people coming together to collaborate on their areas of common skill. That way, you might get the expertise of 4–5 people on one aspect – and it’s a great way to develop your specialised skill while also working in a multidisciplinary team.
I was pretty determined to have something working by the end of the weekend – whether it was a website or some other ‘finished product’. Having recently read some good articles on creative types needing time to themselves to do good work, I took advantage of the great building we were in to find quiet places to work for chunks of time – coming back to check in with the team along the way. I also did a couple of hours work late on Saturday night while I was at home.
A lot of time was wasted while I tried to work out why I couldn’t ssh to the Amazon cloud server I had created, or to the Ninefold virtual server. It’s been a while since I played with these services, so I wasn’t sure if I was doing it wrong. But when I returned on Sunday I had some other people test for me – and it turns out that port 22 is blocked on their network! Boo.
Once I switched to tethering my mobile to the laptop, it was all good and I was able to ssh into the servers to get things running.
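For next time, here’s a quick way to check whether outbound port 22 is even reachable before blaming your server config – a few seconds of this would have saved me hours. The host below is a placeholder; you’d point it at your own server.

```python
import socket

def port_open(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable networks alike.
        return False
```

If this returns False on the venue wifi but True when tethered to your phone, the network is blocking the port – not your server.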
I used a basic Debian image on Amazon AWS and ran “apt-get install apache2 php5 mysql-server” to get a LAMP server up and running. Once the IP address was assigned, and the ports opened via the Amazon console, I was good to go.
I also spent some of my time getting better acquainted with GitHub, which I used as a repository for the code that was written. The basics of the HTML come from HTML5 Boilerplate, which provides a robust starting point for developing mobile-friendly web sites.
Once all the server and system guff was sorted, it was pretty quick to get the prototype up and running. The prototype writes secrets to a database, but was yet to allow file uploads or send emails.
If you check the About page on the Ta-Dah website, you’ll see I listed a number of issues that would need to be addressed before the idea was likely to be viable. Putting aside how you could collect revenue – encryption is likely to be a big issue. How do you keep encrypted info like this safe for long periods of time? 50+ years?
That could have been a whole service design exercise on its own.