StreetCred


Color

Most colors throughout the site should use one of the CSS custom properties from this section.

Color properties must begin with --color.

Color properties should be stored as a hex value by default. If a color needs to be partially transparent or have its alpha channel animated, a corresponding custom property in RGB format should be created with the -rgb suffix.

Usage Examples

Standard

.accented {
  background-color: var(--color-accent-primary);
}

rgba

.semi-transparent {
  background-color: rgba(var(--color-white-rgb), 0.75);
}

Accent colors

  • --color-accent-primary #8428ff
  • --color-accent-primary-rgb 132, 40, 255
  • --color-accent-primary-dark #46039e
  • --color-accent-secondary #e81640
  • --color-accent-secondary-dark #b10002
  • --color-accent-tertiary #3665ff
  • --color-accent-tertiary-rgb 54, 101, 255
  • --color-accent-tertiary-dark #3665ff
  • --color-accent-orange #FF7800
  • --color-accent-orange-rgb 255, 120, 0
  • --color-accent-yellow #ffd500
  • --color-accent-yellow-rgb 255, 213, 0
  • --color-accent-green #00e866
  • --color-accent-green-rgb 0, 232, 102

Common colors

  • --color-text #230b42
  • --color-text-rgb 35, 11, 66
  • --color-white #ffffff
  • --color-white-rgb 255, 255, 255
  • --color-gray-1 #F8F8F8

Contest colors

  • --color-contests #FF7800
  • --color-worldGreen #47C000
  • --color-worldGreen-dark #163D00
  • --color-regionalBlue #36AEFF
  • --color-regionalBlue-dark #003d66
  • --color-localYellow #ffd500
  • --color-localYellow-dark #604b06
  • --color-gold-medal #f3c609
  • --color-gold-text #604b06
  • --color-gold-base #fff8df
  • --color-gold-border #ffedab
  • --color-silver-medal #bcc5cd
  • --color-silver-text #525e6a
  • --color-silver-base #f3f5f7
  • --color-silver-border #cfd0d1
  • --color-bronze-medal #f99841
  • --color-bronze-text #93591b
  • --color-bronze-base #fdf1e4
  • --color-bronze-border #f9e9d7

Typography

  • VAG Rundschrift D

    Regular weight (400) only. Used for headings h1-h4. Loaded via Typekit.

    --font-heading .font--heading
  • Inter

    Variable font. Used for all standard text. Weights 100-900 available. Regular and Italic styles. Hosted with site code.

    --font-body .font--body
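
A minimal usage sketch (the selectors here are illustrative; the custom properties come from the list above):

.page-title {
  font-family: var(--font-heading);
}

.article-body {
  font-family: var(--font-body);
}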

Headings

Heading 1

Heading 2

Heading 3

Heading 4

Heading 5

Heading 6

Body Text

This is a regular paragraph of text set in our body font family.

A strong element
A b element
An emphasized element
An i element

Font Sizes

Rather than using arbitrary font sizes throughout CSS files, use the predetermined custom properties below. We use a base font size of 16px.

  • font size 1
    --font-size-1 1rem Base: 16px
  • font size 2
    --font-size-2 1.125rem Base: 18px
  • font size 3
    --font-size-3 1.25rem Base: 20px
  • font size 4
    --font-size-4 1.375rem Base: 22px
  • font size 5
    --font-size-5 1.625rem Base: 26px
  • font size 6
    --font-size-6 1.75rem Base: 28px
  • font size 7
    --font-size-7 2.25rem Base: 36px
  • font size 8
    --font-size-8 2.625rem Base: 42px
  • font size 9
    --font-size-9 2.875rem Base: 46px
  • font size 10
    --font-size-10 3.75rem Base: 60px
  • font size 11
    --font-size-11 4.875rem Base: 78px
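
For example, a component could take its sizes from these properties rather than hard-coded values (a minimal sketch; the selectors are illustrative):

.card-title {
  font-size: var(--font-size-5);
}

.card-meta {
  font-size: var(--font-size-1);
}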

Grid

This isn’t an attempt at a full, responsive CSS grid system. This is a set of classes and custom properties to help set the width and height of elements, margin, padding, etc. to keep layout consistent.

We offer classes and custom properties for setting widths on elements.

Percentage Custom Properties

Each of these produces a percentage value for setting a flexible width.

--cols1
--cols2
--cols3
--cols4
--cols5
--cols6
--cols7
--cols8

Example usage

.container {
  width: var(--cols6);
}

Percentage Classes

Each of these applies a percentage value for setting a flexible width.

.cols1
.cols2
.cols3
.cols4
.cols5
.cols6
.cols7
.cols8

Example usage

<div class="cols6"></div>

Flexbox Grid Example

This is a demonstration of a containing element set to display: flex, along with child divs that act as rows. Rows are also set to display: flex. Each item uses a grid class to set its percentage width.
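
A minimal markup sketch of this pattern (the flex-grid and flex-row class names are illustrative, and it assumes the column classes are fractions of an eight-column layout so each row's items sum to a full width):

<div class="flex-grid">
  <div class="flex-row">
    <div class="cols4">Item</div>
    <div class="cols4">Item</div>
  </div>
  <div class="flex-row">
    <div class="cols2">Item</div>
    <div class="cols6">Item</div>
  </div>
</div>

.flex-grid,
.flex-row {
  display: flex;
}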

Buttons

Indicator

An animation to display while some activity is taking place. className is an optional parameter.

  • {{> indicator className="visible" }}

Blog Post Preview

This is a Handlebars partial used to preview a single blog post. In the examples below we're accessing single blog posts using an index, but typical usage of blog-post-preview is within a loop over an array of posts.
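
A minimal sketch of that looping usage (the posts variable is an assumption; inside the loop each post becomes the partial's context):

{{#each posts}}
  {{> blog-post-preview }}
{{/each}}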

For special cases where you need different markup than what's in the partial, you can use a blog-post-preview class on a containing element to mimic the styles for this molecule.

Standard Usage Example

Without Summary Example

Blog Post Figures

Blog posts contain different shapes and sizes of images. We offer different classes to be used on figure elements to help make images fit with text best depending on the size and shape of each image.

Figure Classes

.figure-center
.figure-left
.figure-right

Example usage

<figure class="figure-left">
  <img src="https://example.com/image.png" alt="My image" />
  <figcaption>A helpful caption</figcaption>
</figure>

Example blog post

The default figure with an image and no classes or styles. This works best for images at about 2:1 ratio. 1200 pixels wide is good.

Our results are provisional for now, reflecting a test dataset drawn from recent POI additions. The accuracy of users (described in detail at the end) represents the rate at which they submit correct data. From this test set, user accuracy averages around 80%. This is based on the accuracy assessment of 84.3% of the 674 users in the test data (excluding users who have only submitted non-validated data).

Users produce similarly accurate data across experience level.

The figure-center class applied to a figure element will cap the width of the image to the same width as the text and center it. This is also good for images at a 2:1 ratio, but not as big.

It turns out that the community as a whole does a reliably great job (see scatter plot above). Users produce similarly accurate data across experience level, whether they are new to the app or are leaderboard champs.

The high accuracy rate of users translates into even higher accuracy of the POIs they create. This is the result of aggregating multiple corroborating data points from independent users. For example, if two users submit a matching data point for a place, the odds of both being wrong is lower than of just one making a mistake.

Among all of the 42,000 POIs in the test data, the accuracy is above 75% (with the exception of 1 outlier; see figure above). These findings reflect the efficacy of our multi-user approach. By combining data across the community, StreetCred fundamentally improves the accuracy of the data it generates.

The distribution of POI accuracies reflects this underlying dynamic. The accuracy of POIs is bimodally distributed: one cluster corresponding to pending POIs (~82% accuracy) and one highly accurate cluster corresponding to approved POIs (~98% accuracy).

The figure-left class applied to a figure element will cap the width of the image to 50% of the overall width and allow text to flow along its right side. This works well for smaller images that are either close to 1:1 or 4:5 ratio.

This can be further teased apart by looking at the distribution of accuracy by approved vs pending statuses (see figure above). The average accuracy is 98.4% for approved POIs and the distribution around the mean is extremely narrow, with most POI accuracy right around the mean. This suggests that validated POIs are nearly completely accurate.

For pending POIs, accuracy averages 84.8%, though with a wider distribution. Even pending POIs are generally quite accurate: more than one third of pending places have an estimated accuracy above 90%.

For now, we have used this model to get a better understanding of the data generated by the community. There are a number of caveats and assumptions built into this version of the model, and this will require further refinement before we can draw broader conclusions about the overall dataset. As a first pass, this probabilistic approach illustrates how the quality of user submissions translates into even better results at the community level.

The first step is to come up with a consensus of all the user submissions for the true label for each data type (e.g., name, location, hours) for each POI. In practice, this is equivalent to the current method used for validation: take the label that at least two users independently agree is accurate for each type of data.

The figure-right class applied to a figure element will cap the width of the image to 50% of the overall width and allow text to flow along its left side. This works well for smaller images that are either close to 1:1 or 4:5 ratio.

We add a slight twist to this approach by including a weight on votes, preferring answers from users who have historically provided accurate data (more on this in the next section). Specifically, the vote of user i is weighted using the log-odds of the accuracy rate p_i: w_i = log(p_i / (1 - p_i)).
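
For example, assuming the natural logarithm, a vote from a user with p_i = 0.9 carries weight log(0.9 / 0.1) ≈ 2.2, while a vote from a user with p_i = 0.6 carries weight log(0.6 / 0.4) ≈ 0.4, so historically accurate users pull the consensus more strongly toward their answer.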

We choose the most likely label from this weighted voting routine as the tentative true label.

The user accuracy rate, p, represents the proportion of correct data to the total data provided by a user. This interpretation of accuracy is similar to reliability in that we are not comparing against a ground truth and cannot completely rule out a systematic bias. However, the typical user should be able to accurately reflect the real state of a POI, given the nature of the information being observed, so this type of bias is unlikely to affect our interpretation of the results. Using the tentative true labels from Step 1, we assess how well each user performed with the accuracy rating, p. We take p to be the mean of a Beta distribution updated with a running tally of correct and incorrect data for each user.
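
As a concrete sketch (the prior parameters are an assumption, since they are not specified here): starting from a uniform Beta(1, 1) prior, a user with 8 correct and 2 incorrect submissions would have an updated Beta(9, 3) distribution, giving p = 9 / 12 = 0.75.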