urbanTick

Archive
Tag "gadget"

Paper is dead and everything is digital these days. Not quite; that was the big promise twenty years ago, but we are still nostalgic about the use of paper, about having something physical amid all this fast-changing content and the battle for information.

Into the mists of the storm comes the promise of a little gadget that potentially bridges the gap and links the digital online world with the physicality of paper. It's tiny and could soon be your best friend. Berg presents the Little Printer.

Berg Little Printer
Image taken from Engadget / This little box is your new best friend and keeps you up to date with the latest buzz. Little Printer by Berg.

It's a small printer, printing anything you feed it via a mobile phone or a Wi-Fi connection onto receipt-size roll paper, as reported via Wired. Berg has started to implement some news sources as well as RSS feeds. It is even possible to send messages to other printers directly.

Berg Little Printer News Summary
Image taken from Berg / You can print a summary of the latest news, for example from Foursquare.

Berg had earlier started to look into the topic of information feeds and ways to integrate the feeds people consume on their small mobile phone screens with other media. There were ideas integrating billboards, train timetable boards, train tickets and so on. See a summary HERE. With this new gadget, announced to be available in 2012, Berg introduces a whole family of possible gadgets summarised as the Berg Cloud. This is good marketing!

Read More

TimeLapse has been something of a niche pursuit, with a few geeks loving it and producing seductive clips. Those days now seem over: TimeLapse is becoming widely used and much more sophisticated. Different techniques and subjects have been explored and developed, and now technology is moving in. The big-name cameras are getting more widely used and their built-in features bring TimeLapse up to speed. But it is not only the high-end market; across the board, equipment, including post-processing, is more easily accessible and simpler to handle, which makes the difference.

Now the gadgets are also becoming more widely used, and here we have two examples showing off the potential of motion control equipment. The shots are fascinating and truly take this genre to the next level.

The two videos use different brands of motion control equipment. The first one is testing a beta Stage Zero Motorized Timelapse Dolly developed by dynamicperception, and the second one is using a drive cam developed by dotgear. Both are still in development but should be available for pre-order.

Read More

It is really something that is the aesthetic of the time. Thin black endless wiggly lines on an abstract white background, densifying here, loosening up there, only to cuddle into another heap of completely tangled-up strings over here. These abstract patterns are visually fascinating, but why this is, I am not sure. One thing is the abstraction from an obvious, continuous activity of some sort, the presence of an invisible repetition, of which one is sure must be there, and the forming pattern of density and mess.
We have, over the last two or three years, learned to recognise these sorts of drawings as movement lines. Movement of people and animals perhaps, but movement lines quite different from other movement paths previously visualised, such as the path of the sun or planets, the movement of shadows or water. It contains the aspects of immediate and real-time decisions on the spot, the reaction to a range of influences from large-scale, distant events to the immediate surroundings and interactions with other static or moving objects. It represents in this sense a process, as a string of events that were actively dealt with. This aspect of process, or in this context better 'creation', in the sense of creating as you go along, of individual actions influenced by background, experience and personality, is a unique characteristic that is usually either underestimated and simplified away or overestimated by dismissing it as random. What exactly is its role in a denser, aggregated context?

IOGraphica_1.9hours_100607
Image by urbanTick / Movement tracking over a period of 1.9 hours, working in the evening on some posts and mapping tasks. The activity is captured as cursor positions using the software IOGraphica.

The visualisation here comes very close to what has been described above, but actually it does not represent any physical movement; it is a simple track map of cursor activity on the computer screen. There are similarities, but the context is extremely confined and designed to work and relate in a specific way. Nevertheless it produces visually interesting images. And if you're bored and don't have time to go for a walk, a stroll and drift through the city, let your mouse cursor do it for you. The tool, called IOGraphica, was developed by Anatoly Zenkov and Andrey Shipilov and is currently available in v0.9. It is a tool to run in the background and track your workday at the desk. Once started, it records each position of the cursor as well as the duration, draws lines between them and, upon request, visualises the time spent per location as growing black dots. Only a few options are available, but they are nicely presented.
Download from HERE. See some more visuals on flickr.
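For the curious, here is a minimal sketch in Python of the underlying idea: sample the cursor position at a fixed interval, join successive positions with thin lines and grow a dot where the cursor rests. It assumes the pynput and matplotlib libraries and is only an illustration of the principle, not IOGraphica's actual code.

```python
import time
from pynput import mouse            # pip install pynput
import matplotlib.pyplot as plt     # pip install matplotlib

def record(duration_s=60, sample_s=0.1):
    """Sample the cursor position and accumulate dwell time per resting spot."""
    ctl = mouse.Controller()
    points, dwell = [], []
    last = None
    end = time.time() + duration_s
    while time.time() < end:
        pos = ctl.position
        if pos == last and points:
            dwell[-1] += sample_s        # cursor resting: grow the dot
        else:
            points.append(pos)
            dwell.append(sample_s)
        last = pos
        time.sleep(sample_s)
    return points, dwell

def draw(points, dwell):
    """Thin black lines between positions, dot size proportional to dwell time."""
    xs, ys = zip(*points)
    plt.plot(xs, ys, color="black", linewidth=0.5)
    plt.scatter(xs, ys, s=[d * 20 for d in dwell], color="black")
    plt.gca().invert_yaxis()             # screen coordinates grow downwards
    plt.axis("off")
    plt.show()

points, dwell = record(duration_s=30)
draw(points, dwell)
```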

Thanks to Paul M. for the link

Read More

Tales of Things, the new service to link digital memories and physical objects, has gone online recently. It was covered widely in the media, from the New Scientist to WIRED and the Guardian, as well as of course on urbanTick HERE and HERE. The Internet of Things has come to life. It is now in your pocket on your iPhone and ready to interact 24/7. How and why this is happening now, with this new project out of the TOTeM labs, is the question put to the initiators. In this interview Ralph Barthel, from the developer team behind the service, explains the context and the details of this project.

urbanTick: Tell us something about your background and your role in the project and of course tell us about your most precious tale!

Ralph: My research and work background is in the areas of social computing, design research and new media system development with specific applications for learning and knowledge building. In this first phase of the project I was responsible for the development of the backend web application of the Tales of Things service and some aspects of the Interaction Design. In the next few months I will start to explore additional interactions and novel user interfaces to engage with the Tales of Things service.
My first tale on Tales of Things was about an old audio tape recorder (Grundig TK 23) that my grandfather owned. It was built in 1963 and is extremely heavy by today’s standards. Interacting with this thing brings back joyful memories from my youth.

Grundig TK 23 Advertisement
Image taken from the TalesOfThings website / The Grundig TK23 documentation from the 60's. Find out more about the Grandfather tale on TalesOfThings.

urbanTick: Can you describe the development process of this project?

Ralph: In October 2009 Andy Hudson-Smith, the project leader here at CASA, brought Martin De Jode, Benjamin Blundell and me together to work on the TOTeM (Tales of Things and Electronic Memory) project. The TOTeM project is funded through a £1.39 million research grant from the EPSRC to explore social memory in the emerging culture of the Internet of Things. Five universities in the UK (Edinburgh College of Art, University College London, Brunel University, The University of Salford and The University of Dundee) are collaborating in this project. The scope of our initial work up to the launch in April 2010 was very much predetermined and detailed by the TOTeM project plan. Consequently we soon started building and evaluating prototypes of our web application and mobile clients with the aim to refine them through formative evaluation with project partners, advisors and selected user groups. In the next phase of this project the Tales of Things service will enable us and our partner institutions to study the relationship of personal memories and old objects when mediated through tagging technologies.

urbanTick: Technical difficulties and special solutions?

Ralph: From a technical point of view the main difficulty in an applied project like TOTeM is to leverage the capabilities of broadly available tagging and ubiquitous computing technologies while making them accessible for a large number of people. In this context it is important to go beyond the step of providing a proof of concept (which is the purpose of many research projects) and to create a sustainable and maintainable technological infrastructure for years to come. Within the constraints of a research project with a small technical core team it can be difficult to balance innovation with providing basic support services. This tension cannot readily be resolved, and in the next few months, also depending on the uptake of the service, we will see how this develops.

urbanTick: In this sense Tales of Things is not a pure research project. What are the aims, and who are you working with on the development and on the application (service)?

Ralph: The core development team currently does all development work and hosting in-house. Our project partners in Salford are exploring the possibilities of commercialisation. We are planning to collaborate with libraries and museums and to present Tales of Things technologies at events and festivals. TOTeM will, for example, be at the FutureEverything Festival in Manchester in May.

urbanTick: Describe the basic steps to take part in the tales of things project.

Ralph: To start, people can go to www.talesofthings.com, browse around and have a look at some of the tales that have already been added. They can register on the site for a free account and download the iPhone application that reads Tales of Things QR Codes and enables people to create new tales when they interact with a tagged object. After logging in to our web service people can create a new thing. To do this they would typically provide some information about the thing, such as a description and title, and a photo of the object if available. In the process of creating a thing they will also be asked to provide a first tale for the thing they are adding. People can then generate and print the QR Codes of their things and comment on other people's tales of things. The website provides further map views that display where in the world the tales have been created.

urbanTick: The tale refers to the memory someone has of a thing. As we all know, these memories are variable and can be difficult to pin down. Can you describe the strategy you developed to capture these ephemeral thoughts; what does a tale consist of?

Ralph: A tale starts with a brief textual description and a title. References to any addressable media, for example from services like YouTube, Flickr or Audioboo, can be added to a tale. Currently files from the three mentioned services are displayed in an integrated media player interface. All other URLs are linked as additional resources. Finally, a geolocation can be added to a tale.
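Purely as an illustration of that structure (this is a guess at how such a record could look, not the project's actual data model), a single tale could be represented along these lines:

```python
# Hypothetical sketch of a tale record, based only on the fields Ralph lists
# above: title, short text, linked media URLs and an optional geolocation.
tale = {
    "title": "Grandfather's tape recorder",
    "text": "Built in 1963 and extremely heavy by today's standards.",
    "media": [
        "http://www.youtube.com/watch?v=example",   # any addressable media URL
        "http://www.flickr.com/photos/example/1",
    ],
    "location": {"lat": 51.5246, "lon": -0.1340},   # optional geolocation
}
```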

Banksy'sMaid_talesofthings
Image taken from TalesOfThings / The tale of the Banksy maid in Camden, long gone but still here.

urbanTick: The project only launched two weeks back, on the 17th of April. How was it received and how will you develop the platform in the coming weeks?

Ralph: It received quite a bit of media coverage, for example in the Guardian Technology blog and on BBC Radio 4. The media feedback was largely positive. There were also some critical voices that doubt that people will socialise around tagged objects. Obviously this is something that time will tell. The media coverage brought some attention to the project; many people visited the website and several hundred have already signed up for user accounts.
At this stage we will closely follow how people engage with the Tales of Things service. At this point we are looking for different uses and the values and meaning that people assign to Tales of Things in several pilot studies with different communities. The results from this piloting stage will inform further development efforts. We also aim to support additional mobile platforms such as Android and to develop an API so that other services can connect to Tales of Things.

urbanTick: There are a number of specific terms frequently used to describe aspects of this project. Some are borrowed, some are newly defined and others are everyday words. Can you explain the “thing”, the “tale” and the “tag”?

Ralph: A thing refers to any object (e.g. industrial objects, tools, architecture) people would like to link an individual memory to. A tale is the story of a personal memory that someone associates with this thing. A tale is told on the platform using different digital media (text, video, images, audio). Video, image and audio media can be taken from the web, and users can create textual content through our web service. Consequently people can link any addressable digital media file in the creative storytelling process. The thing and the tale(s) are then linked via the tag. This is a unique identifier in the form of a QR Code. This tag is machine readable and can be attached to the thing. The Tales of Things service generates QR Codes for each thing automatically. We also have the option to use RFID identifiers to mark an object. This emerging technology is known, for example, from Oyster cards. We are currently developing a Tales of Things RFID reader to further explore the possibilities of this technology. For now, any existing RFID tags can be linked to the things in our database.
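As a side note for readers curious about the tagging step, generating such a QR Code is straightforward; here is a minimal sketch using the common Python qrcode library. The URL pattern is a made-up placeholder, not necessarily the scheme talesofthings.com actually uses.

```python
import qrcode  # pip install qrcode[pil]

# Encode a (hypothetical) thing URL in a QR Code that can be printed
# and attached to the physical object.
thing_url = "http://www.talesofthings.com/things/1234"  # placeholder URL
img = qrcode.make(thing_url)
img.save("thing-1234-tag.png")
```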

urbanTick: The project could be classified as being another social networking site. Is it, and if so what is different, or how would you characterize it instead?

Ralph: In the concept of Tales of Things the physical interaction with tagged objects is important. People can only add new tales about things if they physically interact with an object by reading its tag. Certain permissions can only be shared and passed along through the interaction with the object, which changes the configuration of the server. While people can view tales of things on our website, they can only add new tales when interacting with the tags. Consequently the website, which has elements of social networking sites, is only a part of the entire service experience of Tales of Things. The project aims more broadly to explore the implications of a service space in which, enabled through ubiquitous forms of computing, the physical world and cyberspace are interlinked. The project is interdisciplinary, so the research inquiry includes aspects of Human-Computer Interaction, Art Practice, Anthropology and Commerce.

worldofthings_talesofthings
Image taken from talesofthings.com / The World of Things, a map on the project site showing the location of the objects and tales. It is also possible to track objects, as the service logs each location where an object was scanned.

urbanTick: The potential of the Internet of Things?

Ralph: There is a certain anticipation that the Internet of Things will eventually lead to a technical and cultural shift as societies orient towards ubiquitous forms of computing. The development of technology and practices are often co-evolving, so it is important to understand possible implications. Internet of Things applications can be complex services that evolve in space and time. The experience of using an Internet of Things service spans several user interfaces, and the design space encompasses physical artifacts in the real world as well as conceptual artifacts. Personally I am interested in exploring human-computer interaction (HCI) in this design space as it poses specific methodological, ethical and philosophical challenges that need to be addressed when designing IoT applications.

urbanTick: The Internet of Things is not new; why do you think it is emerging again just now?

Ralph: The idea of tagging of things and networked objects is by no means new. What has changed in recent years is that enabling technologies such as internet-enabled smartphones have become more affordable, usable and widespread. More and more people carry powerful small computational devices with them. This has led recently to a renaissance of Internet of Things applications used in a non-industrial context which can be witnessed by services like Foursquare or Pachube.

urbanTick: Critical mass for the Internet of Things to enter as an important player?

Ralph: Internet of Things applications are already important and widespread in many industries such as logistics. The TOTeM project is concerned with a different application of the Internet of Things, outside industry practice. I can't say what the critical mass for our project is. The critical mass is not necessarily the most important aspect of the project. It might very well be that the technologies developed as part of this project have the potential to add value to the social practices of specific communities. Such findings would be equally important. Tales of Things is after all a research project, albeit an applied one.

urbanTick: What is your vision for this project?

Ralph: The partners in TOTeM are from five universities and have different backgrounds, and might therefore have different visions. From a research perspective I am mainly interested in studying and exploring the Internet of Things as a hybrid interaction design space and how IoT applications can be used for learning and knowledge building in everyday activities. From a long-term perspective it would be great to see a sustained engagement of many people with the Tales of Things service.

Read More

Prezi is the tool you have been dreaming of ever since you were forced to use PowerPoint for the first time. Finally those dreadful times are over! Prezi is here and it works! There is little to be said; just head over to Prezi and start using it. Alternatively you can stay here and test it right below with the embedded presentation. Use the arrow to click through; once you feel comfortable you can freely interact, drag the canvas with the mouse or zoom in and out with the mouse wheel or a trackpad gesture. Note, I had problems loading this particular prezi in my Chrome browser. I am having an issue with Flash; Safari and Firefox should be safe.


However, if you are still here, or you have come back, here is some more information on what it is and what it can do.
It is a Flash-based application that allows you to present content in a non-linear way. You are working on a single infinite canvas, on which you can arrange content. Double click anywhere to write text, add images or shapes. It all works intuitively and is graphically stylish from the beginning.
The structuring is achieved foremost by panning the canvas and zooming in or out on content. These simple gestures make for extremely powerful tools, and together with the stylish camera animation the result is astonishing.
The zooming is applied already while populating the canvas. By zooming in, performing gestures known from digital map navigation, a certain hierarchy is established. Text size, for example, will be directly adjusted to the current zoom level. There are virtually no limitations to the zoom level. This in itself is already an extreme feature that can be used to surprise the audience. Details can be unveiled as the show progresses; elements that were only blurred lines will suddenly become the important points. This will definitely engage the audience.
The panning, or sequential camera movement, is applied with a separate tool as a path of numbered dots. Of course they will be invisible in presentation mode. The panning is not restricted to horizontal and vertical movement but can also include rotations, and with a clever integration this too will definitely engage.
For me the online version is definitely more interesting. Having a fast internet connection helps. The very first question then is what kind of web content can be integrated. Here the functions are limited, but the most important feature works: videos can be embedded via YouTube. Simply put the link as text on the canvas. The clip will be shown just there; however, an internet connection is required.
Presentations can be put together online in your web browser, or you can download a desktop client. The final product can either be presented online or be downloaded and shown locally. To present locally you don't need any extra software, as this is integrated.
This is a really cool tool, with the downside of the price tag. There is a free version that carries a Prezi watermark; the next tiers will set you back $60 or $160 per year. If you have to give a few presentations a year and you want to spice them up while having fun, this product is definitely worth it! There is also an academic licence available, a great option.
So far I have been using Google Docs for my presentations, mainly because it was a simple solution to have the content I wanted to show online. I have to say that I actually don't like it; the graphics and the interface are just horrible and the options are very limited, which in itself is not a bad thing. But if elements cannot be arranged or scaled properly, combined with limited font and colour options, it becomes extremely difficult to create a nice sheet. Also the presentation options were constantly limited to the browser window, and the biggest thing on the screen was usually the Google logo. Prezi is a lot slicker and offers style out of the box. The potential of the non-linear structure and the power of the zoom are a revelation. For me this is definitely one of the software tools of the year! And it comes with cool Twitter support.

Thanks for the link go to urbagram

Read More

I will be giving a talk today showing investigations on a city level using digital footprint data. I was invited to talk at the MSc for Adaptive Architecture and Computation at the Bartlett School of Architecture. ‘Digital Footprints / Tracing Bodies Through Narratives of the Everyday’ will be looking at temporal aspects of citizens' trails, data collection and visualisation.
New material processed from our Twitter project will feature as an example.
The talk will focus on digital aspects of the data, but still I would like to draw the connections to the physical aspects, as well as very importantly to the people themselves.
After the talk the students will be presenting some of the work they are developing at the moment, followed by a discussion.


Image by urbanTick / Storyboard One.

Read More

Tracking the movement of individuals in the urban environment is one of the elements of the UrbanDiary project run by urbanTick. However, we are interested here in any sort of tracking, and this ranges from tracking animals to climate change and planets. For the UD project GPS technology is used and this works fine. However, it does not work indoors, and as one of the first participants quickly pointed out, we actually spend quite a lot of time indoors. Take a normal working day and you're likely to spend about three hours commuting, and the rest you're indoors: office, shop, restaurant or church.
At the same time however, you’re not likely to move very far inside. From the desk to the coffee machine or the printer and maybe from one floor to another. Nevertheless it can be quite a lot of movement over the day, depending on the job and the task.
So indoor tracking might be of some interest. And it actually exists as a commercial branch. It is of special interest to commercial and retail operators, like shopping malls for example. We featured a product HERE that was based on mobile phone signals.
However, the company timeDomain offers a range of products providing a similar tracking service. TD provides tag-based tracking products, but also tracking without tags. This tag-less product is demonstrated in a video HERE, and it seems to work stunningly well, even with a number of subjects in the same perimeter. Tag-based products can be used in a number of settings and are mainly promoted for retail. Here trolleys or even individual goods, such as clothes, can be tagged. Flash demo HERE, and a video demo HERE.

timeDomainPlus.HJKymEEQPMfZ.jpg
Image by timeDomain – illustrating usage of the Plus

Read More

The rise of location information brought us knowledge of where we are and beyond. Today you're not only told where you are but also what is around you, what it looks like, how far away it is and in which direction. It almost assumes that you are not actually there. This is usually also the selling point: if you can't find it, for example, or you're still too far away, this will give you guidance. However, it also demands in-depth engagement from the end user. This is probably the point where all these services have trouble penetrating the everyday.
However, it is still fascinating and if you are into mapping and interested in what happens around you sooner or later aspects of time will start bothering you. Most of the apps feeding your ‘location awareness’ are actually static. They relate to one point in time or assume a permanence.
This is now being addressed by a number of emerging apps, including augmented reality ones like Layar. But also in the area of the actual map information there is a rising wealth of information regarding past locations, in the form of old aerial photos or historic maps. Google introduced the timeline feature in Google Earth earlier this year with version 5.0, giving you the ability to access the old aerial photos used since the launch of the Google Earth service in 2005. Now this has also moved to the mobile market, and apps for the iPhone are available. This blog earlier featured the great app Historic Earth, which has a huge database of old digital maps from its mother company Historic Map Works. Now the Edinburgh College of Art has developed a new web-based mapping service called ‘Walking Through Time’ that is also available for mobile gadgets, such as Android phones and the iPhone. It looks really promising, with the developers saying: “…our user group is interested in walking through real space whilst following a map from 200 years ago (for example) and being able to tag and attach links to the map that offer historical and contextual information”. Tagging and linking? That is something we are interested in; sounds great!
See teaser below.

found via digitalUrban

Read More

TimeLapse has featured here extensively before, and I am always interested to hear about new projects in stop motion. One of the aspects of time lapse is the ‘compression’ of time, as opposed to ‘real time’ video recording at 25/30 frames per second. TimeLapse can be shot at any frame rate per second, minute or year. In post-processing the images are then output as a clip at the video frame rate. The result is a video with dropped frames, skipping sections, but thus compressing an event into a much smaller timeframe.
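As a rough illustration of that compression (with example numbers, not figures from any particular project), the factor is simply the capture interval multiplied by the playback frame rate:

```python
# Example: one frame captured every 30 seconds, played back at 25 fps.
capture_interval_s = 30
playback_fps = 25

compression_factor = capture_interval_s * playback_fps  # 750x faster than real time
print(compression_factor)

# One hour of real time then collapses into 3600 / 750 = 4.8 seconds of clip.
print(3600 / compression_factor)
```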
There are brilliant examples of yearlong projects, capturing the change of the seasons.
For a lifelong project a couple of difficulties have to be overcome. One is the readiness of the photographer: so as not to miss the opportunity to get the good shot, one has to be constantly on the trigger. This is not possible over a longer period while ‘living’ the lead role in the soap opera project. Another implication is storage capacity; even though it is a compressed version of filming, it quickly generates a lot of data.
A new product is about to enter the market to take on exactly the customers who are interested in that kind of stuff, which would consequently somehow include me too. The product was initially developed by Microsoft in one of its research centres, actually the Cambridge research centre. In short it is a camera that can be worn as a bracelet and that takes images, as the name suggests, triggered by a bunch of sensors. These sensors are light-intensity and light-colour sensors, a passive infrared (body heat) detector, a temperature sensor, and a multiple-axis accelerometer. The camera processor controls the sensors and takes a picture if there is a change in the sensed environment. Everything is automatic, hands-free photography so to say. Cleverly the developers got rid of the viewfinder, to save on unnecessary elements and probably to stop customers using the device as a normal camera. Whether the device has an actual release button to manually shoot an important scene is not reported.
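Purely as a sketch of that triggering idea (the sensor readings, thresholds and camera interface below are assumptions for illustration, not Microsoft's actual firmware), the control loop could look something like this:

```python
import time

# Illustrative thresholds; the real device's values are not public here.
LIGHT_DELTA = 50        # change in light level that counts as a new scene
TEMP_DELTA = 1.5        # change in temperature (degrees)
MIN_INTERVAL_S = 30     # reported minimum time between two shots

def changed(prev, cur):
    """Return True if the sensed environment differs enough to take a picture."""
    return (abs(cur["light"] - prev["light"]) > LIGHT_DELTA
            or abs(cur["temp"] - prev["temp"]) > TEMP_DELTA
            or cur["body_heat"] != prev["body_heat"])

def run(sensors, camera):
    """Poll hypothetical sensor and camera objects; fire the shutter on change."""
    prev = sensors.read()   # e.g. {"light": 120, "temp": 21.0, "body_heat": False}
    last_shot = 0.0
    while True:
        cur = sensors.read()
        if changed(prev, cur) and time.time() - last_shot >= MIN_INTERVAL_S:
            camera.capture()    # hands-free shot, no viewfinder involved
            last_shot = time.time()
        prev = cur
        time.sleep(1)
```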

senseCam01.r3o9yAqHvbt4.jpg
Images by Microsoft – Example shots taken with a SenseCam

Reading the specs does not necessarily make you jump for joy: the cam sports a VGA 640×480 pixel resolution sensor. I am not a big fan of massive resolution, but having at least the option for a timeLapse on Vimeo in HD should probably be standard. That's only some 1280×720 pixels, what a first-generation iPhone will do! But it goes on: the camera is capable of taking a picture only every 30 secs, and there is currently only 1GB of flash memory available. Microsoft suggests this will give you room for 30'000 pictures.
As a life log this is, as Gizmodo points out, only a record of 10 days at a 30-second rate, not exactly a lifetime. Again, there is currently no data available regarding the power supply, but this is likely to have additional implications. It is unlikely that the cam will manage a 10-day session.
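A quick back-of-the-envelope check of those figures (using the stated 30'000 pictures and the 30-second minimum interval) lands at roughly the same ten days:

```python
pictures = 30_000      # Microsoft's stated capacity of the 1GB flash memory
interval_s = 30        # minimum capture interval in seconds

days = pictures * interval_s / 86_400  # 86'400 seconds per day
print(round(days, 1))  # ~10.4 days of continuous logging
```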
Microsoft has now licensed the product to Vicon, a specialist in motion capturing based in Oxford, UK. The reason named for this move is demand; so far some 500 devices have been produced. The new producer is prepared to launch the product in the next few months, according to the New Scientist. But at a price of £500.00. Not cheap, eh, you might think now; me too.

However, the blogging community has taken this announcement as an occasion to test a few slogans. They came up with a couple of funny titles for the device: SenseCam – the Black Box Flight Recorder for human beings, by gizmag.com; ‘Black box’ cam for total recall, by the BBC.

senseCam02.zVH4frfgp90u.jpg
Images by Microsoft/Vicon – the SenseCam

And here an example of the cam in use.

Read More

TomTom announced its navigation software for the iPhone earlier this year at WWDC. It was announced in a blog post and was somehow exciting at the time. Only two and a half months later the software is published, but it is all not that exciting anymore. It might be great software and it no doubt works fine, but since the introduction of the 3GS at the same WWDC, so much has changed on the mobile gadget market. Only this month the crowd-sourced traffic platform WAZE was introduced in the United States, and Layar opened up AR layers for a broad range of uses. In fact AR has been the big topic for mobile phone platforms, and Android is leading as an AR platform at the moment. TomTom has not yet announced anything for the Android platform.
Anyway, one piece of software cannot do everything, we are well aware of this, but this now pushes the iPhone, with its “can not run anything in the background” policy, to its limit. If I ever use TomTom on my iPhone I want to have the WAZE live traffic update on top of it to give me up-to-date information, and why not have some user-generated stuff as AR blobs on the screen as well. For me all this fits together and will hopefully eventually merge into something I would more readily call “navigation” software.

Augmented Reality Navigation from Robert Winters on Vimeo.

So navigation in the style of AR would be exciting, but the ever so normal (we have now definitely got used to it) “after 200m turn right” TomTom is not exciting anymore. Nevertheless, here is the latest TomTom clip to sweeten the wait for the actual iPhone car kit.

The company has not yet announced the release date for this important element of in-car navigation. In fact it is really funny, but theoretically the software is somewhat useless without the car kit. Of course some clever guys came up with a solution.

Found through GPSobsessed

Read More