Crowdsourcing

From CURATEcamp
Revision as of 05:03, 12 May 2012 by Patrick.etienne (talk | contribs) (Discussions revolving around the crowd-sourcing of metadata and tag/label schemes.)

1) Who's using user-added metadata?

  - Successes and Failures
  - Importing (Ingesting) or Exporting

2) What types of metadata fields are good candidates for crowd-sourced metadata?

3) What effective incentives can be provided for metadata entry?

4) Mechanical Turk

  - Used by Amazon
  - Output looks computer-generated but is human-created

5) What possible legal issues might there be with crowd-sourced metadata?

6) What quality control or authority control systems can be implemented?

  - What reputation systems might be employed to handle quality / authority issues?
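One common approach to the quality/authority question is reputation-weighted voting: a tag is accepted only when enough trusted users agree on it. The sketch below is a minimal illustration, not a method from the session notes; the function name, the default weight for unknown users, and the threshold are all assumptions.

```python
from collections import defaultdict

def accept_tags(votes, reputation, threshold=1.0):
    """Decide which crowd-sourced tags to accept for an item.

    votes: list of (user, tag) pairs submitted by users.
    reputation: dict mapping user -> trust weight (hypothetical 0.0-2.0 scale).
    A tag is accepted when its reputation-weighted vote total
    meets or exceeds the threshold.
    """
    scores = defaultdict(float)
    for user, tag in votes:
        # Unknown users get a low default weight (an assumption of this sketch)
        scores[tag] += reputation.get(user, 0.5)
    return {tag for tag, score in scores.items() if score >= threshold}
```

For example, two low-reputation users agreeing can carry the same weight as one highly trusted user, which is one way a reputation system can substitute for formal authority control.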

7) What methods of integration could there be with non-user-generated metadata?

8) Controlled Vocabularies

  - There must be some consideration of domain-specific vocabularies
  - One size does not fit all
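Integrating free-text user tags with a controlled, domain-specific vocabulary often comes down to a normalization step: map variant spellings onto preferred terms, and flag anything outside the vocabulary for human review. This is a minimal sketch under that assumption; the function name and the example terms are illustrative, not drawn from the notes.

```python
def normalize_tag(raw, vocabulary, synonyms):
    """Map a free-text user tag onto a controlled vocabulary term.

    vocabulary: set of preferred terms for one domain.
    synonyms: dict mapping variant spellings to preferred terms.
    Returns the preferred term, or None when the tag falls outside
    the vocabulary (a candidate for human review).
    """
    term = raw.strip().lower()
    if term in vocabulary:
        return term
    return synonyms.get(term)
```

Because the vocabulary and synonym table are passed in rather than hard-coded, each collection domain can supply its own, reflecting the point above that one size does not fit all.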

9) Awareness - Users must be aware of crowd-sourcing features

  - Marketing
  - Advertising
  - Public Relations