Compare commits


No commits in common. "bcfaa0e5fbbadca42451b18dadec3703ac99701f" and "0a65f25b602f001188f5ef46d92bb767333c4f84" have entirely different histories.

10 changed files with 8244 additions and 10199 deletions


@@ -1,45 +0,0 @@
---
title: The Design of Everyday Things
date: 2024-02-18
---
Harvard Business School marketing professor Theodore Levitt once pointed out, "People don't want to buy a quarter-inch drill. They want a quarter-inch hole!" Levitt's example of the drill implying that the goal is really a hole is only partially correct, however. When people go to a store to buy a drill, that is not their real goal. But why would anyone want a quarter-inch hole? Clearly that is an intermediate goal. Perhaps they wanted to hang shelves on the wall. Levitt stopped too soon.
Once you realize that they don't really want the drill, you realize that perhaps they don't really want the hole, either: they want to install their bookshelves. Why not develop methods that don't require holes? Or perhaps books that don't require bookshelves. (Yes, I know: electronic books, e-books.)
--- Don Norman, pp 43--4
Designers should strive to minimize the chance of inappropriate actions in the first place by using affordances, signifiers, good mapping, and constraints to guide the actions. If a person performs an inappropriate action, the design should maximize the chance that this can be discovered and then rectified. This requires good, intelligible feedback coupled with a simple, clear conceptual model. When people understand what has happened, what state the system is in, and what the most appropriate set of actions is, they can perform their activities more effectively.
--- ibid (and the larger 'credo about errors'), p 67
(seven fundamental principles of design):
1. Discoverability
2. Feedback
3. Conceptual model
4. Affordances
5. Signifiers
6. Mappings
7. Constraints
--- ibid (pp 72--3)
…although long-term residents of Britain complained that they confused the one-pound coin with the five-pence coin, newcomers (and children) did not have the same confusion. This is because the long-term residents were working with their original set of descriptions, which did not easily accommodate the distinctions between these two coins. Newcomers, however, started off with no preconceptions and therefore formed a set of descriptions to distinguish among all the coins…
What gets confused depends heavily upon history: the aspects that have allowed us to distinguish among the objects in the past. When the rules for discrimination change, people can become confused and make errors. With time, they will adjust and learn to discriminate just fine and may even forget the initial period of confusion.
--- ibid, p 82
Make something more secure, and it becomes less secure.
--- ibid, p 90
Four kinds of constraint:
1. Physical
2. Cultural
3. Semantic
4. Logical
--- ibid, p 125
The American psychologists Charles Carver and Michael Scheier suggest that goals have three fundamental levels that control activities. Be-goals are at the highest, most abstract level and govern a person's being: they determine why people act, are fundamental and long lasting, and determine one's self-image. Of far more practical concern for everyday activity is the next level down, the do-goal. Do-goals determine the plans and actions to be performed for an activity. The lowest level of this hierarchy is the motor-goal, which specifies just how the actions are performed: this is more at the level of tasks and operations rather than activities.
--- ibid, p 233
Brynjolfsson 2011 & 2012: middling chess players paired with middling machines beat the best players or the best machines (p 287)
The society of the future: something to look forward to with pleasure, contemplation, and dread.
--- ibid, p 291


@@ -1,46 +1,4 @@
 [
-{
-"publishers": [
-"PAN MACMILLAN"
-],
-"physical_format": "paperback",
-"title": "Permanent Record",
-"covers": [
-10118259
-],
-"isbn_13": "9781529035667",
-"full_title": "Permanent Record",
-"isbn_10": "152903566X",
-"publish_date": "Sep 14, 2019",
-"authors": [
-{
-"id": "OL7618561A",
-"name": "Edward Snowden"
-}
-],
-"work": {
-"id": "OL20080323W",
-"title": "Permanent Record",
-"subjects": [
-"Snowden, edward j., 1983-",
-"Government information",
-"Whistle blowing",
-"Leaks (Disclosure of information)",
-"Officials and employees",
-"United States",
-"United States. National Security Agency",
-"Biography",
-"nyt:combined-print-and-e-book-nonfiction=2019-10-06",
-"New York Times bestseller",
-"New York Times reviewed",
-"Electronic surveillance"
-]
-},
-"ol_id": "OL28181327M",
-"date_added": "2019-09-14",
-"date_started": "2024-02-20",
-"added_by_id": "9781529035667"
-},
 {
 "description": {
 "type": "/type/text",
@@ -49,7 +7,7 @@
 "full_title": "Nonviolent communication a language of life",
 "authors": [
 {
-"ol_id": "OL243612A",
+"ol_author_id": "OL243612A",
 "name": "Marshall B. Rosenberg"
 }
 ],
@@ -77,7 +35,7 @@
 "PuddleDancer Press"
 ],
 "work": {
-"ol_id": "OL2018966W",
+"ol_work_id": "OL2018966W",
 "title": "Nonviolent Communication",
 "first_publish_date": "1999",
 "subjects": [
@@ -110,7 +68,7 @@
 "Self-improvement"
 ]
 },
-"ol_id": "OL27210498M",
+"ol_edition_id": "OL27210498M",
 "date_added": "2019-11-09",
 "date_started": "2024-02-13",
 "added_by_id": "9781892005281"
@@ -129,7 +87,7 @@
 "publish_date": "Apr 02, 2017",
 "authors": [
 {
-"ol_id": "OL7477772A",
+"ol_author_id": "OL7477772A",
 "name": "Martin Kleppmann"
 }
 ],
@@ -137,7 +95,7 @@
 "976434277"
 ],
 "work": {
-"ol_id": "OL19293745W",
+"ol_work_id": "OL19293745W",
 "title": "Designing Data-Intensive Applications",
 "subjects": [
 "Development",
@@ -153,7 +111,7 @@
 "005.276"
 ]
 },
-"ol_id": "OL26780701M",
+"ol_edition_id": "OL26780701M",
 "date_added": "2021-06-26",
 "date_started": "2024-01-17",
 "added_by_id": "9781449373320"
@@ -233,7 +191,7 @@
 ],
 "isbn_13": "9781788680523",
 "work": {
-"ol_id": "OL15419603W",
+"ol_work_id": "OL15419603W",
 "title": "France",
 "subjects": [
 "Guidebooks",
@@ -245,7 +203,7 @@
 "Europe - France"
 ]
 },
-"ol_id": "OL50982390M",
+"ol_edition_id": "OL50982390M",
 "date_added": "2024-01-02",
 "date_started": "2023-12-25"
 }

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,18 +1,6 @@
 [
 {
-"tmdb_id": 2418,
-"origin_country": [
-"US"
-],
-"overview": "Hank and Dean Venture, with their father Doctor Venture and faithful bodyguard Brock Samson, go on wild adventures facing megalomaniacs, zombies, and suspicious ninjas, all for the glory of adventure. Or something like that.",
-"poster_path": "/ckQE1aLYQkRpp2HmHljiELAiOr1.jpg",
-"first_air_date": "2004-08-07",
-"name": "The Venture Bros.",
-"date_added": "2024-01-17",
-"date_started": "2024-02-11"
-},
-{
-"tmdb_id": 2710,
+"id": 2710,
 "name": "It's Always Sunny in Philadelphia",
 "overview": "Four egocentric friends run a neighborhood Irish pub in Philadelphia and try to find their way through the adult world of work and relationships. Unfortunately, their warped views and precarious judgments often lead them to trouble, creating a myriad of uncomfortable situations that usually only get worse before they get better.",
 "poster_path": "/pRWO6ufqSNkWvPXDDQhBwPNSv4K.jpg",
@@ -24,7 +12,7 @@
 "added_by_id": "tt0472954"
 },
 {
-"tmdb_id": 242807,
+"id": 242807,
 "name": "Skibidi Toilet",
 "overview": "Skibidi Toilet is a apocalyptic series where camera-mans fight with the skibidi toilets.",
 "poster_path": "/4YtVG3wrFYwt4JjQKiasqWdweLV.jpg",
@@ -34,7 +22,7 @@
 "added_by_id": "tt27814427"
 },
 {
-"tmdb_id": 87917,
+"id": 87917,
 "origin_country": [
 "US"
 ],

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,11 +1,5 @@
 """
-Add a new item to a media catalogue, using various APIs:
-- TV series' and films using the TMDB API and IDs;
-- TV episodes using the TMDB API and TVDB IDs (because the TMDB
-  API is difficult and a lot of TMDB records don't have IMDB IDs);
-- books using the OpenLibrary API and ISBNs; and
-- games using the GiantBomb API and IDs.
+Add a new item to a media catalogue, using various APIs.
 """
 import json
@@ -49,9 +43,12 @@ logger = setup_logger()
 load_dotenv()
 TMDB_API_KEY = os.getenv("TMDB_API_KEY")
+TVDB_API_KEY = os.getenv("TVDB_API_KEY")
 if "" == TMDB_API_KEY:
     logger.error("TMDB API key not found")
+if "" == TVDB_API_KEY:
+    logger.error("TVDB API key not found")
 def return_if_exists(item_id, media_type, log) -> dict | None:
@@ -94,12 +91,7 @@ def delete_existing(item_id, media_type, log) -> None:
 def check_for_existing(item_id, media_type, log) -> dict[dict, str]:
-    """
-    Check for an existing item in the current log, and pull the
-    `date_added` etc. and mark it as a repeat if so.
-    Otherwise, check for an existing item in the other logs, and move
-    it to the specified log if so.
-    """
+    """Check for an existing item and move it to the specified log if requested"""
     logger.info(f"Checking for '{item_id}' in logs…")
@@ -142,11 +134,6 @@ def add_item_to_log(item_id, media_type, log) -> None:
     if item is None:
         raise Exception("No item found")
-    if "books" == media_type and "wishlist" != log:
-        item, log_to_delete = check_for_existing(item['work']['ol_id'], media_type, log)
-        if item is None:
-            item, log_to_delete = check_for_existing(item['ol_id'], media_type, log)
     if log in ["log", "current"]:
         if "date_started" not in item and media_type in ["books", "tv-series", "games"]:
             date_started = ""
@@ -209,28 +196,28 @@ def import_by_id(import_id, media_type, log) -> dict:
         return import_from_tmdb_by_id(import_id, media_type)
     if media_type in ["tv-episodes"]:
-        return import_from_tmdb_by_external_id(import_id, media_type)
+        return import_from_tmdb_by_imdb_id(import_id, media_type)
     if media_type in ["books"]:
         if "wishlist" == log:
             return import_from_openlibrary_by_ol_key(import_id)
         else:
-            return import_from_openlibrary_by_isbn(
+            return import_from_openlibrary_by_id(
                 "".join(re.findall(r"\d+", import_id)), media_type
             )
-def import_from_tmdb_by_external_id(external_id, media_type) -> dict:
-    """Retrieve a film, TV show or TV episode from TMDB using an IMDB or TVDB ID"""
-    api_url = f"https://api.themoviedb.org/3/find/{external_id}"
+def import_from_tmdb_by_imdb_id(imdb_id, media_type) -> dict:
+    """Retrieve a film, TV show or TV episode from TMDB using an IMDB ID"""
+    api_url = f"https://api.themoviedb.org/3/find/{imdb_id}"
     # Sending API request
     response = requests.get(
         api_url,
         headers={"Authorization": f"Bearer {TMDB_API_KEY}"},
-        params={"external_source": "imdb_id" if re.search("tt[0-9]+", external_id) else "tvdb_id"},
+        params={"external_source": "imdb_id"},
         timeout=15
     )
@@ -240,7 +227,7 @@ def import_from_tmdb_by_external_id(external_id, media_type) -> dict:
     elif 429 == response.status_code:
         time.sleep(2)
-        return import_from_tmdb_by_external_id(external_id, media_type)
+        return import_from_tmdb_by_imdb_id(imdb_id, media_type)
     else:
         raise Exception(f"Error {response.status_code}: {response.text}")
@@ -255,7 +242,7 @@ def import_from_tmdb_by_external_id(external_id, media_type) -> dict:
     response_data = json.loads(response.text)[key][0]
     if response_data == None:
-        raise Exception(f"Nothing found for TVDB ID {external_id}!")
+        raise Exception(f"Nothing found for IMDB ID {imdb_id}!")
     # Modify the returned result to add additional data
     return cleanup_result(response_data, media_type)
@@ -264,6 +251,9 @@ def import_from_tmdb_by_external_id(external_id, media_type) -> dict:
 def import_from_tmdb_by_id(tmdb_id, media_type) -> dict:
     """Retrieve a film, TV show or TV episode from TMDB using an TMDB ID"""
+    if "tv-episodes" == media_type:
+        raise Exception("TV Episodes are TODO!")
     api_path = "movie" if "films" == media_type else "tv"
     api_url = f"https://api.themoviedb.org/3/{api_path}/{tmdb_id}"
@@ -289,7 +279,7 @@ def import_from_tmdb_by_id(tmdb_id, media_type) -> dict:
     return cleanup_result(response_data, media_type)
-def import_from_openlibrary_by_isbn(isbn, media_type) -> dict:
+def import_from_openlibrary_by_id(isbn, media_type) -> dict:
     """Retrieve a film, TV show or TV episode from TMDB using an IMDB ID"""
     logging.info(f"Importing '{isbn}'")
@@ -305,7 +295,7 @@ def import_from_openlibrary_by_isbn(isbn, media_type) -> dict:
     elif 429 == response.status_code:
         time.sleep(2)
-        return import_from_openlibrary_by_isbn(isbn, media_type)
+        return import_from_openlibrary_by_id(isbn, media_type)
     elif 404 == response.status_code:
         logger.error(f"{response.status_code}: Not Found for ISBN '{isbn}'")
@@ -387,7 +377,7 @@ def import_from_openlibrary_by_ol_key(key) -> dict:
     item = json.loads(response.text)
     if "authors" == mode:
-        author = {"ol_id": ol_id, "name": item["name"]}
+        author = {"id": ol_id, "name": item["name"]}
         if "personal_name" in item:
             if item["name"] != item["personal_name"]:
@@ -404,7 +394,7 @@ def import_from_openlibrary_by_ol_key(key) -> dict:
         return author
     if "works" == mode:
-        work = {"ol_id": ol_id, "title": item["title"]}
+        work = {"id": ol_id, "title": item["title"]}
         for result_key in ["first_publish_date", "subjects"]:
             if result_key in item:
@@ -447,7 +437,6 @@ def cleanup_result(item, media_type) -> dict:
         "popularity",  # TMDB
         "production_code",  # TMDB
         "production_companies",  # TMDB
-        "publish_places",  # OpenLibrary
         "revenue",  # TMDB
         "revision",  # OpenLibrary
         "runtime",  # TMDB
@@ -467,8 +456,8 @@ def cleanup_result(item, media_type) -> dict:
         if field_name in item:
             del item[field_name]
-    if media_type in ["films", "tv-series", "tv-episodes"]:
-        item["tmdb_id"] = item["id"]
+    if media_type in ["films", "tv-series"]:
+        item["id"] = item["tmdb_id"]
         del item["id"]
     title_key = "name" if "tv-series" == media_type else "title"
@@ -480,10 +469,6 @@ def cleanup_result(item, media_type) -> dict:
     ):
         del item[f"original_{title_key}"], item["original_language"]
-    if "tv-episodes" == media_type:
-        item['series']['tmdb_id'] = item['show_id']
-        del item['show_id']
     if "books" == media_type:
         _, _, item["ol_id"] = item["key"].split("/")
         del item["key"]
@@ -495,6 +480,10 @@ def cleanup_result(item, media_type) -> dict:
             item[key] = item[key][0]
+    if "publish_places" in item:
+        item["published_in"] = item["publish_places"]
+        del item["publish_places"]
     if "languages" in item:
         item["languages"] = [
             lang["key"].split("/")[2] for lang in item["languages"]
@@ -572,7 +561,7 @@ def main() -> None:
         while re.search("[0-9]+", item_id) is None:
             item_id = input("Enter TMDB ID: ")
-        add_item_to_log(re.search("[0-9]+", item_id)[0], media_type, log)
+        add_item_to_log(item_id, media_type, log)
     except Exception:
         logger.exception("Exception occurred")
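The external-ID branch removed in the diff above picks TMDB's `external_source` value from the shape of the ID passed to `/find/{external_id}` (IMDB IDs look like `tt0472954`, TVDB IDs are bare digits). A minimal sketch of that selection logic — the `find_params` helper name is mine, not from the script:

```python
import re

def find_params(external_id: str) -> dict:
    # TMDB's /find/{external_id} endpoint requires an external_source
    # query parameter naming the ID scheme; IMDB IDs start with "tt",
    # TVDB IDs are purely numeric.
    source = "imdb_id" if re.fullmatch(r"tt\d+", external_id) else "tvdb_id"
    return {"external_source": source}

# The lookup itself (needs a real bearer token; not run here):
# requests.get(f"https://api.themoviedb.org/3/find/{external_id}",
#              headers={"Authorization": f"Bearer {TMDB_API_KEY}"},
#              params=find_params(external_id), timeout=15)
```

Note this uses `re.fullmatch`, which is stricter than the script's `re.search("tt[0-9]+", …)`.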


@@ -33,17 +33,9 @@ def process_log(media_type, log) -> None:
     log_item_values = {}
-    id_key = ""
-    if "books" == media_type:
-        id_key = "ol_id"
-    elif media_type in ["films", "tv-series", "tv-episodes"]:
-        id_key = "tmdb_id"
-    elif "games" == media_type:
-        id_key = "gb_id"
     for i, item in enumerate(log_items):
         try:
-            if id_key not in item and "skip" not in item:
+            if "id" not in item and "skip" not in item:
                 if media_type in ["films", "books"]:
                     item_title = item["Title"]
                 elif "tv-episodes" == media_type:
@@ -58,16 +50,10 @@ def process_log(media_type, log) -> None:
                     log_item_values["date_added"] = item["Date Added"]
                     del item["Date Added"]
-                if "date_added" in item:
-                    log_item_values["date_added"] = item["date_added"]
                 if "Date Started" in item:
                     log_item_values["date_started"] = item["Date Started"]
                     del item["Date Started"]
-                if "date_started" in item:
-                    log_item_values["date_started"] = item["date_started"]
                 if "Date Finished" in item:
                     log_item_values["date_finished"] = item["Date Finished"]
                     del item["Date Finished"]
@@ -77,16 +63,10 @@ def process_log(media_type, log) -> None:
                 else:
                     raise Exception(f"'Date Read' != 'Date Finished' for {item['Title']}")
-                if "date_finished" in item:
-                    log_item_values["date_finished"] = item["date_finished"]
                 if "Read Count" in item:
                     log_item_values["read_count"] = item["Read Count"]
                     del item["Read Count"]
-                if "read_count" in item:
-                    log_item_values["read_count"] = item["read_count"]
                 if "Date Watched" in item:
                     log_item_values["date_finished"] = item["Date Watched"]
                     del item["Date Watched"]
@@ -136,18 +116,11 @@ def process_log(media_type, log) -> None:
                 if "IMDB ID" in item and item["IMDB ID"] != "":
                     new_log_item = import_by_id(item["IMDB ID"], media_type)
-                elif "books" == media_type and "wishlist" == log:
-                    ol_work_id = re.search("OL[0-9]+W", input(f"Enter OpenLibrary Work ID for '{item_title}' ({item['Author']}): "))
-                    try:
-                        new_log_item = import_by_id(ol_work_id[0], media_type, log)
-                    except:
-                        logger.info("Skipping…")
                 elif "ISBN13" in item and item["ISBN13"] != "" and item["ISBN13"] is not None:
-                    new_log_item = import_by_id(item["ISBN13"], media_type, log)
+                    new_log_item = import_by_id(item["ISBN13"], media_type)
                 elif "ISBN" in item and item["ISBN"] != "" and item["ISBN"] is not None:
-                    new_log_item = import_by_id(item["ISBN13"], media_type, log)
+                    new_log_item = import_by_id(item["ISBN"], media_type)
                 else:
                     new_log_item = import_by_details(item, item_title, media_type)
@@ -190,7 +163,7 @@ def process_log(media_type, log) -> None:
                 else:
                     log_items[i] = new_log_item
-                if i % 3 == 0:
+                if i % 10 == 0:
                     with open(
                         f"./data/{media_type}/{log}.json",
                         "w",
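The final hunk changes the checkpoint interval from every 3 items to every 10: the whole in-progress list is rewritten to disk periodically so a crash mid-run loses at most a few items' work. A stripped-down sketch of that pattern — the `save_checkpoint` name and `checkpoint.json` path are illustrative, not from the script:

```python
import json

def save_checkpoint(path: str, items: list) -> None:
    # Rewrite the whole in-progress log; a crash later in the run
    # then loses at most one interval's worth of processed items.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(items, f, indent=4)

# Saving every N items trades fewer disk writes (larger N) against
# more lost work on failure (the diff moves N from 3 to 10).
items = [{"id": 1}, {"id": 2}]
for i, item in enumerate(items):
    if i % 10 == 0:
        save_checkpoint("checkpoint.json", items)
```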