International Murders
International murder rates are among the data for analysis in the #tidyTuesday for December 10, 2019. The data lend themselves to a map.
library(tidyverse)
library(leaflet)
library(stringr)
library(sf)
library(here)
library(widgetframe)
library(htmlwidgets)
library(htmltools)
options(digits = 3)
set.seed(1234)
theme_set(theme_minimal())
library(tidytuesdayR)
# download the week 50 data and extract the gun murders table
tuesdata <- tt_load(2019, week = 50)
murders <- tuesdata$gun_murders
There isn’t much data, which should make this a bit easier. The best way I currently know to map these data involves acquiring a spatial frame to join them to.
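As a minimal sketch of that join, the rnaturalearth package can supply country polygons as an sf object. The country and count column names below are assumptions about the week 50 data.
library(rnaturalearth)
# country polygons as an sf object
world <- ne_countries(scale = "medium", returnclass = "sf")
# join the murder data by country name (column names are assumptions)
murder_map <- left_join(world, murders, by = c("name" = "country"))
ggplot(murder_map) +
  geom_sf(aes(fill = count)) +
  scale_fill_viridis_c(na.value = "grey90", name = "Murders")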
Philadelphia Map
Use ggmap for the base layer.
library(ggmap); library(osmdata); library(tidyverse)
PHI <- get_map(getbb("Philadelphia, PA"), source = "stamen", maptype = "toner-lite", zoom = 12)
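With the base layer in hand, ggmap() renders it as a ggplot object that later layers can be added to.
ggmap(PHI)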
Get the Tickets Data
This week’s #TidyTuesday covers 1.26 million parking tickets in Philadelphia.
tickets <- readr::read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-12-03/tickets.csv")
## Parsed with column specification:
## cols(
## violation_desc = col_character(),
## issue_datetime = col_datetime(format = ""),
## fine = col_double(),
## issuing_agency = col_character(),
## lat = col_double(),
## lon = col_double(),
## zip_code = col_double()
## )
Two Lines of Code Left
library(lubridate); library(ggthemes)
tickets <- tickets %>% mutate(Day = wday(issue_datetime, label = TRUE)) # use lubridate to extract the day of the week
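The second line is the plot itself. A minimal sketch, assuming a simple count of tickets by weekday with a ggthemes theme:
tickets %>%
  count(Day) %>%
  ggplot(aes(x = Day, y = n)) +
  geom_col() +
  labs(x = "Day of the week", y = "Tickets issued") +
  theme_economist()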
Searching and Mapping the Census
Searching for the Asian Population via the Census
To use tidycensus, there are limitations imposed by the available tables. There is the ACS – a survey of about 3 million people – and the two main decennial census summary files, SF1 and SF2. I will search SF1 for the Asian population.
library(tidycensus); library(kableExtra)
library(tidyverse); library(stringr)
v10 <- load_variables(2010, "sf1", cache = TRUE)
v10 %>%
  filter(str_detect(concept, "ASIAN")) %>%
  filter(str_detect(label, "Female")) %>%
  kable() %>%
  scroll_box(width = "100%")
name       label    concept
P012D026   Total!…
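Having identified a variable, retrieving it is one call. A hedged sketch, with the county geography and Pennsylvania chosen purely for illustration, and assuming a Census API key is already set:
# pull the SF1 variable for Pennsylvania counties (illustrative choices)
pa_asian <- get_decennial(
  geography = "county",
  variables = "P012D026",
  state = "PA",
  year = 2010,
  sumfile = "sf1"
)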
mathart
A cool package for math-generated art that I just discovered. Here is the installation code for it.
install.packages(c("devtools", "mapproj", "tidyverse", "ggforce", "Rcpp"))
devtools::install_github("marcusvolz/mathart")
devtools::install_github("marcusvolz/ggart")
devtools::install_github("gsimchoni/kandinsky")
Load some libraries
library(mathart)
library(ggart)
library(ggforce)
library(Rcpp)
library(tidyverse)
Generate some Art?
This is quite fun to do.
set.seed(12341)
# ten terminal points for the Weiszfeld algorithm
terminals <- data.frame(x = runif(10, 0, 10000), y = runif(10, 0, 10000))
# random starting points; the excerpt cuts off before this data frame is
# defined, so its construction here is an assumption
points <- data.frame(x = runif(10000, 0, 10000), y = runif(10000, 0, 10000))
df <- 1:10000 %>%
  map_df(~weiszfeld(terminals, c(points$x[.], points$y[.])), .id = "id")
p <- ggplot() +
  geom_point(aes(x, y), points, size = 1, alpha = 0.25) # alpha reconstructed; the excerpt cuts off at "alpha = 0."
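To draw or save the result, a usage sketch:
p # render the plot
ggsave("weiszfeld.png", p, width = 20, height = 20, units = "cm")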
Some Data for the Map
I want to get some data to place on the map. I found a website with population and population-change data for Oregon in .csv format. I cannot download it directly from R; instead, I have to download it manually and import it.
library(tidyverse)
The Economist’s Errors and Credit Where Credit is Due
The Economist is serious about their use of data visualization and they have occasionally owned up to errors in their graphics. Graphics can be deceptive, uninformative, confusing, excessively busy, and present a host of other barriers to clean communication. Their blog post on their errors is great.
I have drawn the following example from a #tidyTuesday earlier this year that explores this.
So this Robert Mueller guy wrote a report
I may as well analyse it a bit.
First, let me see if I can get hold of the data. I grabbed the report directly from the Department of Justice website; you can follow this link.
library(tidyverse)
library(pdftools)
# read the report (downloaded from the link above), one character string per page
mueller_report_txt <- pdf_text("../data/report.pdf")
# Create a tibble of the text with line numbers and pages
mueller_report <- tibble(
  page = 1:length(mueller_report_txt),
  text = mueller_report_txt
) %>%
  separate_rows(text, sep = "\n") %>%
  group_by(page) %>%
  mutate(line = row_number()) %>%
  ungroup() %>%
  select(page, line, text)
write_csv(mueller_report, "data/mueller_report.csv")
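From here, a first analytic pass might count words. A minimal sketch using tidytext; this is an assumption, since the excerpt ends before any analysis:
library(tidytext)
# one row per word, drop common stop words, and count what remains
mueller_words <- mueller_report %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%
  count(word, sort = TRUE)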
Scraping NFL data
Note: An original version of this post had issues induced by overtime games. There is a better way to handle all of this that I learned from a brief analysis of a tie game between Cleveland and Pittsburgh in Week One.
The nflscrapR package is designed to make data on NFL games more easily available. To install the package, we need to grab it from GitHub.
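A sketch of the installation, assuming the package still lives at maksimhorowitz/nflscrapR on GitHub:
# install nflscrapR from GitHub (repository location is an assumption)
devtools::install_github("maksimhorowitz/nflscrapR")
library(nflscrapR)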