

[EPIC] Image Recommendations Android MVP
Closed, Resolved · Public

Description

Objective
The Android, Structured Data, and Growth teams aim to offer Image Recommendations as a “structured task”. More about the motivations for pursuing this project can be found in the 4. IR_Planning & Spec Document. In order to roll out Image Recommendations and have the output of the task show up on wiki, an MVP needs to be created to enhance the algorithm provided by the research team and to answer questions about the behavior of our users, so that we can determine the potential success of this project and the improvements it needs.

Product Requirements
As a first step in the implementation of this project, the Android team will develop an MVP with the purpose of:

  1. Improve the Image Matching Algorithm developed by the research team by answering the question "How accurate is the algorithm?" We want to set confidence levels for the sources used by the algorithm -- to be able to say that suggestions from Wikidata are X% accurate, suggestions from Commons categories are Y% accurate, and suggestions from other Wikipedias are Z% accurate (see the illustrative sketch after this list)
  2. Learn about our users by evaluating:
    • The stickiness of Image Recommendations across editing tenure, Commons familiarity, and language
    • The difficulty of Image Recommendations as a task, and whether certain matches are harder than others
    • The implications of language preference on users' ability to complete the task
    • The accuracy of users' judgments of the matches; because we're not sure how accurate users are, we want to receive multiple ratings on each image match (i.e. “voting”)
    • The optimal design and user workflow to encourage accurate matches and task retention
    • What measures need to be in place to discourage bad matches
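
As a purely illustrative sketch of the per-source accuracy goal in item 1 above (the types and field names below are hypothetical, not the MVP's actual data model), per-source accuracy can be computed as the share of annotated suggestions from that source that users accept:

```kotlin
// Hypothetical types for illustration only; not the MVP's actual data model.
enum class Source { WIKIDATA, COMMONS_CATEGORY, OTHER_WIKIPEDIA }

data class AnnotatedMatch(val source: Source, val accepted: Boolean)

/** Accuracy per source = accepted matches / all annotated matches from that source. */
fun accuracyBySource(matches: List<AnnotatedMatch>): Map<Source, Double> =
    matches.groupBy { it.source }
        .mapValues { (_, group) -> group.count { it.accepted }.toDouble() / group.size }
```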

Details can be found in:
MVP Specs Doc

Product Decisions

  • We will have one suggested image per article instead of multiple images
  • This iteration of the MVP will not include Image Captions
  • There are no language constraints for this task: as long as an article is available in a language, we will surface it. We want to be deliberate in ensuring this task is completed in a variety of languages. For this MVP to be considered a success, we want the task completed in at least five different languages, including English, an Indic language, and a Latin language.
  • We will have a checkpoint two weeks after the launch of the feature to check whether it is working properly and whether modifications are needed to ensure we get answers to our core questions. The checkpoint is not intended to introduce scope creep.
  • We aren't able to filter by categories in this iteration of the MVP, but it could be a possibility in the future through the CPT API
  • We will surface a survey each time a user says No, and sparingly surface a survey when a user clicks Not sure or Skip
  • We need three annotations from 3000 different users on 3000 different matches. With these three annotations, the tasks will self-grade (see the voting sketch after this list).
  • We will know people like the task if they return to complete it on three distinct dates. We will compare frequency of return by date across user types to understand whether stickiness for this task varies with how experienced a user is
  • Once we pull the data, we will be able to compare the habits of English vs. non-English users. We cannot, and do not need to, show the same image to both non-English and English users; non-English users will have different articles and images. We will know if a task was hard due to language based on the survey response when users click No or Not sure. We will check task retention to see how popular the task is by language.
  • In order to know if the task is easy or hard, we would like to be able to see how long it takes users to complete it (see the timing sketch after this list). NOTE: this only works if we can detect when someone backgrounds the app. Of the people who got it right, how long did it take them?
  • To help gauge whether the task is easy or hard, we should also track whether users click to see more information about the task before making a decision
  • We determined that it is not worth adding extra clicks just to see which metadata is found helpful. Perhaps we allow people to swipe up for more information that generally provides the metadata; we will need to see designs to compare this
  • It is too hard, at least for this MVP, to track whether experienced users see suggestions from this tool and then add the images to articles manually without using the tool, so we aren't going to track that.
  • In the designs, we want to track whether someone skips or presses No on an image because the image is offensive, in order to learn how often NSFW or offensive material appears
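
As noted in the annotation bullet above, the plan is for each match to self-grade from its three ratings. A minimal sketch of that majority-vote idea, assuming hypothetical types rather than the MVP's actual models:

```kotlin
// Hypothetical types for illustration only.
enum class Rating { YES, NO, NOT_SURE }

data class MatchRatings(val matchId: String, val ratings: List<Rating>)

/**
 * "Self-grades" a match by strict majority vote over its (nominally three)
 * ratings; returns null when no rating reaches a majority (e.g. a 1-1-1 split).
 */
fun selfGrade(match: MatchRatings): Rating? {
    val counts = match.ratings.groupingBy { it }.eachCount()
    val top = counts.maxByOrNull { it.value } ?: return null
    return if (top.value * 2 > match.ratings.size) top.key else null
}
```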
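
For the time-on-task bullet above, the note about backgrounding matters because raw wall-clock time would overcount. A rough, illustrative-only sketch (on Android, a monotonic clock such as SystemClock.elapsedRealtime() would be preferable to System.currentTimeMillis()) of accumulating only foreground time via lifecycle callbacks:

```kotlin
// Illustrative only: accumulate time only while the task screen is in the
// foreground, pausing the clock when the app is backgrounded.
class TaskTimer {
    private var accumulatedMs = 0L
    private var resumedAt: Long? = null

    /** Call from the screen's onResume(). */
    fun resume() { resumedAt = System.currentTimeMillis() }

    /** Call from the screen's onPause(); stops counting while backgrounded. */
    fun pause() {
        resumedAt?.let { accumulatedMs += System.currentTimeMillis() - it }
        resumedAt = null
    }

    /** Total active (foreground) time spent on the current image match. */
    fun elapsedMs(): Long =
        accumulatedMs + (resumedAt?.let { System.currentTimeMillis() - it } ?: 0L)
}
```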

[UPDATED]
Based on the user testing of the prototype (see T277861), the following changes need to be made, ranked by priority:

Required

  • T281112 Suppress 'Add to Watchlist' tooltip when accessing from 'Train image algorithm' SE task
  • T278528 Optimize counter element for different themes
  • T280151 Create designs for small screens as an alternative to the draggable bar #needsDevelopment #needsDesign #lowFeasibility
  • T278528 The positive reinforcement/counter element has display issues in the dark/black theme and needs to be optimized. #needsDesign #highFeasibility
  • T275613 Write FAQ page #needsPM #highFeasibility
  • T279454 Distribute image recommendation suggestions in a predictable manner to users
  • T280424 Revisit onboarding copy to incorporate the draggable bottom sheet
  • T278490 Optimize tooltip positioning and handling on smaller screens, as tooltips are currently cut off. #needsDevelopment #mediumFeasibility
  • T278493 Ensure words are not cut off and text overflows gracefully #needsDevelopment #highFeasibility
  • T278526 Create more suitable 'Train image algorithm' onboarding illustrations for all different themes. #needsDesign #highFeasibility
  • T278527 The checkbox items in the 'No' and 'Not sure' dialogs have issues in dark/black theme and need to be optimized. #needsDesign #highFeasibility
  • T278529 Provide an easy way to access the entire article from the feed, e.g. by incorporating a 'Read more' link, tappable article title or showing the entire article right from the beginning. #needsDesign #highFeasibility
  • T278494 Optimize copy of the 'Suggestion reason' meta information, as the current copy ('Found in the following Wiki: trwiki') is not clear enough. #needsDevelopment #needsCopywriting #highFeasibility
  • T278530 It might be worth exploring making the 'Suggestion reason' more prominent, as participants rated its usefulness the lowest (likely due to low discoverability) #needsDesign #mediumFeasibility
  • T278532 Optimize the 'No' and 'Not sure' dialog copy to reflect that multiple options can be selected. Some participants weren’t aware that multiple reasons can be selected. #needsCopywriting #highFeasibility
  • T278496 Optimize copy of the 'opt-in' onboarding screen, as there’s an unnecessary word at the moment ('We would you like (...)'). #needsCopywriting #highFeasibility
  • T278497 Suppress “Sync reading list” dialog within Suggested edits as it’s distracting from the task at hand. #needsDevelopment #highFeasibility
  • T278501 Incorporate gesture to swipe back and forth between image suggestions in the feed, as participants were intuitively applying the gestures. Per previous conversation with @Dbrant, there are questions that need to be answered from a technical perspective. #needsDevelopment #mediumFeasibility
  • T278533 Optimize design of the positive reinforcement element/counter on the Suggested edits home screen, as it was positioned too close to the task’s title. #needsDesign #highFeasibility
  • T278534 Make it clear that reviewing the image metadata is a core part of the task. We can potentially do that by increasing the visual prominence and/or the affordance to promote always opening the metadata screen. #needsDesign #mediumFeasibility
  • T278535 Optimize the discoverability of 'info' button at the top right as 2/5 participants had issues finding it. #needsDesign #mediumFeasibility
  • T278555 Save previous answer state: Given users are able to go back, the selection made in the previous image or images should be retained #needsDesign #mediumFeasibility
  • T278556 Reduce the font size of the fields on the 'More details' screen #needsDesign #highFeasibility
  • T278545 Change the goal count to 10/10 #needsDevelopment #highFeasibility

Nice to Have

  • T278546 Add "Cannot read the language" as a reason for rejection and unsure #needsDevelopment #highFeasibility
  • T278557 Show the full image, contained, instead of a cropped image #needsDesign #mediumFeasibility
  • T278548 Include the same metadata on the 'More details' screen as in the card, notably the suggestion reason (in addition to filename, image description, and caption). #needsDevelopment #lowFeasibility
  • T278549 Show success screen (see designs on Zeplin) when users complete daily goal (10/10 image suggestions) #needsDevelopment #highFeasibility
  • T278550 Explore tooltip "Got it" button #needsDevelopment #mediumFeasibility
  • T278552 Incorporate pinch to zoom functionality, as participants tried to zoom the image directly from the image suggestions feed. #needsDevelopment #lowFeasibility
  • T278558 Remove full screen overlay when transitioning to next image suggestion. This allows users to orient better and keep context after submitting an answer. #needsDesign #highFeasibility
  • T278561 Provide clear information that images come from Commons, or some more overt message about the image source and access to more metadata #needsDesign #highFeasibility


Event Timeline

LGoto updated the task description.

@JTannerWMF and team -- congratulations on the Android beta! I think it turned out well! Here’s my one piece of feedback: it’s about the wording of the “Suggestion reason” field.

  • For the Commons category one, it says, “Image was found on Wikimedia Commons, Wikipedia’s source of freely usable images”. This is pretty long and gets cut off around the word "source".
  • For the Wikidata one, it says something similar about how the image was found on Wikidata (I haven’t seen one of those in a few suggestions and didn’t record the exact text).

I think the misleading thing is that all the images are found on Wikimedia Commons, regardless of suggestion reason; even the ones that are sourced through Wikidata are still found on Commons. I think a couple of good options for the wording of these two suggestion reasons are:

  • Commons category: “Image was connected with this topic on Wikimedia Commons”
  • Wikidata: “Image was connected with this topic on Wikidata”
Dbrant closed subtask T280273: Lock in portrait mode as Resolved.