29 July 2014

Automated Task creation from Evernote Checklist

This tweet intrigued me this morning. Thanks, @colwar.

By this afternoon I had automated tasks being added to Outlook based on checklists in Evernote notes, and still managed to fit meetings and other work in between. I didn’t worry about the text expansion aspects described in the post, as my tablet and phone apps for Evernote have easy-to-insert checkboxes anyway.

My steps…

  1. Using this post from DEG Consulting (via @tabletproductiv) as a guide, I signed up for TaskClone.
  2. In TaskClone I set Task App to Outlook and Task App Destination Email to my work email address. Along the way I tested sending these to a Trello board and it worked great. I just don’t want to use Trello for work reminders.
  3. I set the corresponding Evernote tag in TaskClone to #todo, so that TaskClone knows which Evernote notes it should monitor. Each checklist item gets its own task, so they can be managed separately (a toy sketch of the duplicate-avoidance idea appears just after these steps). This image shows the note after it has been handled by TaskClone, which puts |TC| after each checkbox so that, if the note is updated later and new checklist items are added, it knows not to duplicate existing tasks.
    [Image: the note after TaskClone processing, with |TC| after each checkbox]
  4. In Outlook I had to set up a rule that runs a script, based on this advice. I may need to tweak the script (to set different due dates) and the rule (to delete the email), but I now have tasks being created from incoming emails. A rough sketch of what such a script does appears at the end of this post.
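As promised in step 3, here is a toy illustration of the duplicate-avoidance convention, with a plain-text “[ ]” standing in for Evernote’s real checkbox element. This is just my own sketch of the idea in Python, not TaskClone’s actual code.

```python
MARKER = "|TC|"

def extract_new_tasks(note_lines):
    """Return unprocessed checklist items plus the note with |TC| appended."""
    new_tasks, updated = [], []
    for line in note_lines:
        # A checklist line without the marker has not been turned into a task yet.
        if line.lstrip().startswith("[ ]") and MARKER not in line:
            new_tasks.append(line.lstrip()[3:].strip())
            line = line + " " + MARKER  # mark it so the next pass skips it
        updated.append(line)
    return new_tasks, updated

note = ["[ ] book meeting room", "[ ] email agenda |TC|"]
tasks, note = extract_new_tasks(note)
print(tasks)  # ['book meeting room']
print(note)   # ['[ ] book meeting room |TC|', '[ ] email agenda |TC|']
```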

[image]

Downside: This script will only run in the Outlook client (not on the Exchange server), so the tasks will not be synchronised to my phone/tablet until I’ve logged in at work. Using another compatible task manager would avoid this. For example, with Trello the items appeared immediately as tasks in the app.
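For the curious, here is a rough Python analogue (using pywin32) of what a rule script like this does; the real thing is a VBA macro from the advice linked above. The sender test and due-date offset below are placeholders of my own, not anything TaskClone specifies.

```python
import datetime
import win32com.client  # third-party: pip install pywin32

outlook = win32com.client.Dispatch("Outlook.Application")
inbox = outlook.GetNamespace("MAPI").GetDefaultFolder(6)  # 6 = olFolderInbox

def make_task(msg, due_in_days=1):
    """Create an Outlook task from an email's subject and body."""
    task = outlook.CreateItem(3)  # 3 = olTaskItem
    task.Subject = msg.Subject
    task.Body = msg.Body
    task.DueDate = datetime.datetime.now() + datetime.timedelta(days=due_in_days)
    task.Save()

for msg in inbox.Items:
    # "taskclone" is a placeholder test; match on whatever address or
    # subject the TaskClone emails actually arrive with.
    sender = getattr(msg, "SenderEmailAddress", "") or ""
    if "taskclone" in sender.lower():
        make_task(msg)
```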

22 July 2014

Testing My Scheduler





11 June 2014

Applying 10 rules for care & feeding of scientific data

Based on the rules in this slide, posted by @flexnib on Twitter:

[Image: slide listing the 10 rules]

The rules come from:

10 Simple Rules for the Care and Feeding of Scientific Data by Alyssa Goodman, Alberto Pepe, Alexander W. Blocker, Christine L. Borgman, Kyle Cranmer, Mercè Crosas, Rosanne Di Stefano, Yolanda Gil, Paul Groth, Margaret Hedstrom, David W. Hogg, Vinay Kashyap, Ashish Mahabal, Aneta Siemiginowska and Aleksandra Slavkovic (http://arxiv.org/pdf/1401.2134v1.pdf)

… here is how we applied them to our data from our research into the use of Instagram by libraries for VALA 2014.


The perfect storm: The convergence of social, mobile and photo technologies in libraries (data set)
Wendy Abbott, Jessie Donaghey, Joanna Hare, Peta J. Hopkins

Date range: 2013
Paper presented at VALA 2014, February 3-5, Melbourne, Vic.
Field of Research codes: 200102 (Communication Technologies and Digital Media Studies); 080709 (Social and Community Informatics)
Creative Commons Licence



  1. Love your data, and help others love it too. Advice here is to cherish, document and publish your data.
    We put some effort into compiling the data into consumable file types; we published it online, and we mentioned it, along with its URL, in our presentation (video available) at the conference. We also blogged about the paper and data.
  2. Share your data online, with a permanent identifier.
    We posted our data to our institutional repository. We don’t have a DOI, but it is an archive with long-lasting URLs and provides metadata to make datasets findable.
  3. Conduct science with a particular level of reuse in mind.
    We planned for our data to be inspectable, and if a curious mind wanted to do something creative with it, or extend it, then that was a bonus. Our paper and presentation describe the methods used in compiling it, albeit at a high level. In addition, the survey instruments used were included in the data set.
  4. Publish workflow as context.
    I’m going to have to check how well we recorded this and made it available with the data set. The workflow included some basic modifications to the raw data from the third-party monitoring tool we used, because the first set of data from them varied slightly in headings from the second set after Instagram made some changes to their service, e.g. implementing video sharing. We also made some minor modifications to make sure that the country information was comprehensive. We output some of this data to CSV and uploaded it to create a Google map (a rough sketch of that step appears just after this list). While we covered our methods at a high level in the paper and presentation, I suspect we could have done better with this rule when it came to publishing the dataset. Ah, well – there’s always room for improvement.
  5. Link your data to your publications as often as possible.
    Our slides (Prezi) include the URLs of both the paper and the dataset, and in our presentation we mentioned both. However, on inspection I found that we had neglected to add links between the paper and the dataset in the institutional repository. So that’s on my to-do list.
  6. Publish your code (even the small bits).
    We didn’t write any code – we made use of a third-party product to gather public data from Instagram accounts.
  7. Say how you want to get credit.
    We published our data (and paper) under a Creative Commons licence. This is encoded in the dataset elements.
  8. Foster and use data repositories.
    As librarians in an academic library, we support and promote the use of our institutional repository, e-publications@bond. Our Scholarly Publications & Copyright Team provides research data management support to our University community, including upload of metadata to Research Data Australia.
  9. Reward colleagues who share their data properly.
    Tell your librarians how you have “loved and fed” your research data, and they can help to raise its profile through research repositories, inclusion in open access collections and recommendations to those who Ask-A-Librarian for help finding information. We undertake always to credit the sources of data that we use, in accordance with best practice.
  10. Be a booster for data science.
    Well, I’m writing this post to demonstrate that it is not that hard to apply these rules to simple data. The more complex the data, the more time is needed to sort out the data management plan and implement it. Many academic libraries are ready and available to provide advice on research data management, from the planning stage through to publishing.
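Footnote to rule 4 above: here is a minimal sketch of the CSV step in Python, with made-up column names and figures rather than our actual data.

```python
import csv

# Illustrative counts only, not our actual VALA 2014 figures.
libraries_by_country = {"Australia": 19, "United States": 73, "Canada": 12}

# Write a two-column file that a mapping tool such as Google My Maps can plot.
with open("libraries_by_country.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Country", "Libraries on Instagram"])
    for country, count in sorted(libraries_by_country.items()):
        writer.writerow([country, count])
```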

9 June 2014

Link Highlights (weekly)

  • OpenRefine (formerly Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; extending it with web services; and linking it to databases like Freebase.

    tags: data cleaning refine opendata research tools inn0vate

Posted from Diigo. The rest of my favorite links are here.
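As a taste of the kind of clean-up OpenRefine automates, here is a minimal Python sketch of its default key-collision clustering (the “fingerprint” keying method: trim, lower-case, strip punctuation, then sort and de-duplicate tokens). The sample strings are invented.

```python
import re
from collections import defaultdict

def fingerprint(value):
    """Key-collision fingerprint: trim, lower-case, strip punctuation,
    then sort and de-duplicate the remaining tokens."""
    value = re.sub(r"[^\w\s]", "", value.strip().lower())
    return " ".join(sorted(set(value.split())))

def cluster(values):
    """Group raw strings whose fingerprints collide."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [g for g in groups.values() if len(g) > 1]

messy = ["Bond University Library", "bond university library ",
         "Library, Bond University", "State Library of Queensland"]
print(cluster(messy))
# [['Bond University Library', 'bond university library ', 'Library, Bond University']]
```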

3 June 2014

Missing RSS in our discovery layer

At MPOW we recently upgraded to the latest version of our discovery layer. There were a few odd things to deal with (you might have seen my recent post on the search box), but on the whole it looks fresher, and I’m not aware of any grumbles from our customers.

That said, I was using a feature of the previous version that has not been carried over to the latest one: RSS feeds for search results. Here are some of the things I’d like to use them for.

  1. I would like to use RSS feeds to publicise new books in our collection in various channels.
  2. Previously we were doing this on our website using a Feedburner widget. Since the upgrade I have had to replace it with a search feed from Trove. The search is limited to items held at our library. The problem with this is that our customers clicking through to a title they like now get taken to Trove, and have to navigate through to our catalogue instead of directly to the record in Summon. I’m hesitant to put this in too many channels or give it higher visibility given the disjointed experience for customers.
  3. We had implemented a couple of RSS feeds in our Library Guides as well. These are now broken, and the boxes have had to be deleted. These feeds were topic-based, e.g. new journal articles on “blended learning”. It’s a lost opportunity to drive traffic back to the discovery layer and the rest of the collection.
  4. I have no idea how many (if any) of our customers had set up search alerts using the RSS feeds in Summon 1.0. Many people do not check their feeds every day, and not every feed updates with new items frequently, so it may be quite some time before they notice that a feed is broken. However, we once provided advice in a Library Guide on how to set this up in Summon, for those wishing to keep current in their discipline.
  5. Personal use – I had a search feed set up to keep track of new DVDs being added to our collection for weekend viewing. And when working on a research project on the use of photo-sharing in libraries, I had a search feed set up to track any new literature that we had not yet found. The latter I can still do using Trove, and it will actually be better, as I wouldn’t wish to limit it to the MPOW collection; it should be broader. But for the first example, I only want to know about a video if it’s on our shelves.

    [image]
  6. Some time ago I sent in a suggestion to the vendor for a new books carousel drawing data from Summon (a bit like this) that could be plugged into websites. My suggestion was duly sent off to the developers for consideration, but other development work clearly took priority when planning a major interface redesign. Nevertheless, RSS is one possibility for providing the underlying data to power such a widget or plugin (a minimal sketch of reading such a feed follows this list). I wanted it to display book covers where available, and to be customisable so we could choose topics to complement our communications and events planning. Of course there are other methods for doing this, and they may come up with something fabulous.
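To make point 6 concrete, here is a minimal sketch of how an RSS feed could power such a widget, using the feedparser package. The feed URL is hypothetical; Summon 2.0 not exposing one is exactly the problem.

```python
import feedparser  # third-party: pip install feedparser

# Hypothetical search-results feed URL for a "new books" query.
FEED_URL = "https://library.example.edu/search/rss?q=new+books"

def carousel_items(url, limit=10):
    """Return (title, link) pairs for the newest items in a feed."""
    feed = feedparser.parse(url)
    return [(entry.title, entry.link) for entry in feed.entries[:limit]]

for title, link in carousel_items(FEED_URL):
    print(title, "->", link)
```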

My guess is that this feature was not used by many people, but some of us will miss it.