Covid Isolation Week 1 Day 7 – Quiet Sunday

Life continues on…

This weekend has reminded me of weekends from my childhood, when my parents would decide that they needed to get things sorted in the house – and so that’s what happened. They would get engrossed in chores of various descriptions, leaving me and my sister to entertain ourselves.

It led to the type of boredom that fuelled creativity, after a lot of sitting around watching TV – which makes me wonder whether this imposed seclusion will leave Isaac with more appreciation for things like school and the great outdoors. Probably not – but there is always hope.

I’m hoping that the week to come establishes more of a regular pattern for him and helps settle his mood. It’s been such a drastic change to everyone’s lives, and at the age of 7 I think he struggles to comprehend what it all means. He had bad dreams last night which he refused to talk about – and given the increasing levels of frustration (brought on by not seeing anyone other than his parents) I’m worried about how he will cope with another 11 weeks of this. But we’re continuing to talk to him and give him the time and space to discuss things on his mind… fingers crossed this helps… it all feels a little like being in uncharted territory.

However, video calling has been a lifesaver, along with letting him play Roblox online with Will… socialising in a way, I suppose. Maybe this is the step change in society that makes video calling mainstream.

Highlight of today – going for a long walk through the woods… with a few tricky moments where the rules of social distancing needed some thought as we encountered people on the paths…

Covid-19 Symptom Check

Everyone seems fit and healthy… no fever and no coughs today.
Although Steph was suffering from a sore throat and a headache yesterday – potentially linked to too much wine on Friday night.

Covid Isolation Week 1 Day 5 – circadian rhythms

Today has been a tough one – and not just for us. From reading various social media channels it’s been the end of a tough old week, and lots of people have struggled: juggling work and childcare, worrying where the next toilet roll is coming from, as well as the inevitable lecturing of parents who don’t seem to get the principle of isolation. It even stopped Isaac’s FaceTime call with one of his friends, who had to go to bed early due to behaviour – Isaac had come close through the day as well.

New things today – restrictions on how many people are allowed into supermarkets at once, as well as enforced distancing while you wait to go in! Also social media is filling up with more videos of people who have caught the disease – and we’re not yet at the peak.

But, touch wood, we’re all doing good – apart from the cabin fever.

Still the biggest worry is one of the older generation of the family getting it, and although they are staying in (for the most part), isolation isn’t something they’re grasping as a concept. Popping to a shop if absolutely needed is ok… but popping to 5 shops and the bank is rolling those dice too many times. It’s also odd that, despite being all over 70 (Ken is 80+?), none of them have received a letter from the local NHS… maybe it’s different in Wales to how the media is portraying things. In any case the stern advice from us is to stay inside, stay safe, and let us ferry food if the online deliveries aren’t an option.

Covid-19 Symptom Check

Isaac’s cold is no more – hooray!
Steph had a sore throat this morning (snoring maybe?)
…as for me I think my hay-fever has started…which the hypochondriac in me insists is Corona – but no temp 🙂 

Isaac’s random questions

To pass the time my cousin posed some questions for the kids on Facebook…here are Isaac’s answers:-

1. If you won a million pounds, what would you buy?
Isaac – a house and an iPhone

2. How long does it take to get to America?
Isaac – 2 hours

3. What does mum always say to you?
Isaac – go for a shower (Mum)… Do this do that (Dad)

4. What job would you like to do when you’re big?
Isaac – fireman

5. What is the capital of England?
Isaac – London

6. Where do babies come from?
Isaac – mummy’s tummy, simple

7. At what age do you become an adult?
Isaac – 20 (no longer a teenager)

8. If you could change one rule your family has, what would it be?
Isaac – going to school

9. If you could be a superhero, what superpower would you have?
Isaac – invisibility, speed, flight

10. What would you do to save the planet?
Isaac – recycle and fair trade

11. If you could eat one thing for the rest of your life, what would it be?
Isaac – Chocolate!

12. How much does it cost to buy a house?
Isaac – £100

13. Why do you think we should be nice to other people?
Isaac – so it will be a peaceful planet

14. What does love mean to you?
Isaac – someone cares for you and is always on your side and protect you

15. What are you scared of?
Isaac – pennywise and Corona virus

16. What is so important?
Isaac – mum, dad, school, family, friends

Covid Isolation Week 1 Day 4 – Quiet times

It’s amazing how quiet things are getting.

During the day there are still cars around and people going for walks but at night it gets eerily quiet – no one out driving to those social events or running errands. 

One thing I’ve noticed today is far more sirens – generally far off. Maybe it’s just because there’s less traffic, and so fewer of the normal city sounds, but 3-4 times today I noticed the sirens of ambulances or police cars… I pray whoever is in need of them is ok – which again might be a factor in me noticing more.

So the new routine continues…the new norm…
Get-up, Tea, sometimes breakfast, log on, work, calls, work, more calls, make tea, lunch, catch-up, hand over, shower (I know late), walk, maths, reading, science, snack, log back on, more calls, emails, apologies, tea, work-out, bed time, dinner, TV, bed…rinse repeat.

It’s harder on Isaac, I think. The initial joy at not having to go to school is slowly giving way to boredom, which isn’t helped by him being an only child. But we’re trying to keep him occupied, and the school have been great at setting him things to do – he’s even started a diary.

It also struck me today how I’m slowly losing track of what day it is… without those typical, familiar cues it all just becomes another day… We’re not even through week 1 and I’m sure life will get more and more surreal…

In other events, I saw my mum today when she dropped off some cardboard for Isaac to build a castle from. Oh, and some random flapjacks that she made to free up space in her kitchen. Think we’ll tackle the model building on the weekend (the flapjacks didn’t last a day). I’m trying to get him to build a model of Caldicot Castle but he has his heart set on Castell Coch – because red is his fave colour.

Covid-19 Symptom Check

Apart from the tail end of Isaac’s cold we’re all ok 🙂

 

Machine Learning – Tutorial 5

Regression – Forecasting and Predicting

This next tutorial covers using the trained regression model to forecast out data. Full notes here:-

https://pythonprogramming.net/forecasting-predicting-machine-learning-tutorial/

Key takeaways:-

import pandas as pd
import quandl, math, datetime # imports math, datetime and quandl
import numpy as np # support for arrays
from sklearn import preprocessing, model_selection, svm #machine learning and
from sklearn.linear_model import LinearRegression # regression
import matplotlib.pyplot as plt
from matplotlib import style

style.use('ggplot')

quandl.ApiConfig.api_key = "9qfnyWSTDUpx6uhNX2dc"

df = quandl.get('WIKI/GOOGL') # import data from Quandl
# print (df.head()) # print out the head rows of the data to check what we're getting

# create a dataframe
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close','Adj. Volume']]
df['HL_pct'] = ((df['Adj. High'] - df['Adj. Close']) / df['Adj. Close']) * 100
df['pct_Change'] = ((df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open']) * 100

df = df[['Adj. Close','HL_pct','pct_Change','Adj. Volume']]


forecast_col = 'Adj. Close' # define what we're forecasting
df.fillna(-99999, inplace=True) # replaces missing data with an outlier value (-99999) rather than discarding any data

forecast_out = int(math.ceil(0.01*len(df))) # takes 1% of the length of the dataframe, rounds it up with math.ceil and converts it to an integer

df['label'] = df[forecast_col].shift(-forecast_out) # adds a new column 'label' containing the 'Adj. Close' value from forecast_out days in the future

# print (df.head()) #just used to check data

X = np.array(df.drop(['label'], axis=1)) # everything except the label column; this returns a new dataframe that is then converted to a numpy array and stored as X
X = preprocessing.scale(X) # scale X before classifier - this can help with performance but can also take longer: can be skipped
X_lately = X[-forecast_out:] # used to predict against - note there are no y values for these to check against
X = X[:-forecast_out] # needs to happen after scaling



df.dropna(inplace=True)
y = np.array(df['label']) # array of labels

### create training and testing sets
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2) # 0.2 = 20% of the dataframe

### Swapping different algorithms
# clf = LinearRegression() # simple linear regressions
# clf = LinearRegression(n_jobs=10) # linear regression using threading, 10 jobs at a time = faster
clf = LinearRegression(n_jobs=-1) # linear regression using threading with as many jobs as the processor will handle
# clf = svm.SVR() # base support vector regression
# clf = svm.SVR(kernel="poly") # support vector regression with specific kernel

clf.fit(X_train, y_train) # fit the classifier to the training data
accuracy = clf.score(X_test, y_test) # score it against test

# print(accuracy)

### prediction - easy once the classifier is set

forecast_set = clf.predict(X_lately)
print (forecast_set, accuracy, forecast_out)
df['Forecast'] = np.nan

last_date = df.iloc[-1].name
last_unix = last_date.timestamp()
one_day = 86400
next_unix = last_unix + one_day # moving to future dates not in dataset

## Extend the DataFrame with future dates holding the forecast values
for i in forecast_set:
    next_date = datetime.datetime.fromtimestamp(next_unix)
    next_unix += one_day
    df.loc[next_date] = [np.nan for _ in range(len(df.columns)-1)]+[i]

df['Adj. Close'].plot()
df['Forecast'].plot()
plt.legend(loc=4)
plt.xlabel('Date')
plt.ylabel('Price')
plt.savefig('ML_Tutorial5.svg', bbox_inches='tight') #bbox_inches='tight' minimises whitespace around the fig
plt.show()
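The date-appending loop at the end took me a minute to follow, so here’s a minimal sketch of the same pattern on a toy DataFrame (hypothetical values, no Quandl needed):

```python
import datetime
import numpy as np
import pandas as pd

# toy frame standing in for the real data: one price column plus 'Forecast'
dates = pd.date_range('2020-03-23', periods=3, freq='D')
df = pd.DataFrame({'Adj. Close': [10.0, 11.0, 12.0],
                   'Forecast': [np.nan, np.nan, np.nan]}, index=dates)

forecast_set = [13.0, 14.0]  # pretend output of clf.predict(X_lately)

one_day = 86400  # seconds
next_unix = df.iloc[-1].name.timestamp() + one_day  # first date past the data

# same pattern as the tutorial: NaN for every column except the last ('Forecast')
for i in forecast_set:
    next_date = datetime.datetime.fromtimestamp(next_unix)
    next_unix += one_day
    df.loc[next_date] = [np.nan for _ in range(len(df.columns) - 1)] + [i]

print(df)
```

Two new rows get appended with a NaN close price and the predicted values in ‘Forecast’ – which is why the plot shows the forecast line continuing where the real prices stop.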

Covid-19

There’s only been a few times when I felt I was living through history – the Berlin Wall coming down, 9/11, 7/7… The difference with all of those big events was that I was always an observer, not caught up in them directly.

This one is very different: it’s touching everyone, with no part of the world left unimpacted in some way, shape or form.

So we’ve been in lock-down for about a week, with my boy (Isaac) being off school from Thursday, and so far it’s all been ok. It’s taken a bit of getting used to, replicating the school routine, and it’s odd having to consciously put thought into shopping and food. It’s also struck me as slightly strange how people react and get into panic buying – a behaviour pattern that always strikes me at Christmas (when the shops are only closed for 2 days).

Anyway, I’ve kind of promised myself to keep some notes through this period on how we’re coping – to document our part of it all.

Machine Learning – Tutorial 4

Regression – Training and Testing

This tutorial covered the first application of Regression to sample data.

https://pythonprogramming.net/training-testing-machine-learning-tutorial/

Key takeaways being:-

  • X & y – define features and labels respectively
  • Scaling data helps accuracy and performance but can take longer:
    Generally, you want your features in machine learning to be in a range of -1 to 1. This may do nothing, but it usually speeds up processing and can also help with accuracy. Because this range is so popularly used, it is included in the preprocessing module of Scikit-Learn. To utilize this, you can apply preprocessing.scale to your X variable:
  • Quandl limits the number of anonymous API calls; the workaround is to register and pass a key with requests:-
    quandl.ApiConfig.api_key = “<the API Key>”
  • cross_validation is deprecated – model_selection can be used instead
    from sklearn import model_selection
    X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2)
  • It’s easy to swap out different algorithms – some can be threaded allowing parallel processing  (check algorithm documentation for n_jobs)
  • n_jobs=-1 sets the number of jobs to the maximum number for the processor
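Not part of the tutorial, but to convince myself what preprocessing.scale actually does, here’s a hand-rolled equivalent using only NumPy (made-up numbers):

```python
import numpy as np

# two features on wildly different ranges (made-up numbers)
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

# column-wise standardisation: (x - mean) / std - the same transform
# preprocessing.scale applies (population std, i.e. ddof=0)
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # effectively [0, 0]
print(X_scaled.std(axis=0))   # [1, 1]
```

After scaling, both columns sit in the same small range around zero – which is the point of the -1 to 1 guidance above.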
import pandas as pd
import quandl, math # imports math and quandl
import numpy as np # support for arrays
from sklearn import preprocessing, model_selection, svm #machine learning and
from sklearn.linear_model import LinearRegression # regression

quandl.ApiConfig.api_key = "9qfnyWSTDUpx6uhNX2dc"

df = quandl.get('WIKI/GOOGL') # import data from Quandl
# print (df.head()) # print out the head rows of the data to check what we're getting

# create a dataframe
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close','Adj. Volume']]
df['HL_pct'] = ((df['Adj. High'] - df['Adj. Close']) / df['Adj. Close']) * 100
df['pct_Change'] = ((df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open']) * 100

df = df[['Adj. Close','HL_pct','pct_Change','Adj. Volume']]


forecast_col = 'Adj. Close' # define what we're forecasting
df.fillna(-99999, inplace=True) # replaces missing data with an outlier value (-99999) rather than discarding any data

forecast_out = int(math.ceil(0.01*len(df))) # takes 1% of the length of the dataframe, rounds it up with math.ceil and converts it to an integer

df['label'] = df[forecast_col].shift(-forecast_out) # adds a new column 'label' containing the 'Adj. Close' value from forecast_out days in the future
df.dropna(inplace=True)
# print (df.head()) #just used to check data

X = np.array(df.drop(['label'], axis=1)) # everything except the label column; this returns a new dataframe that is then converted to a numpy array and stored as X
X = preprocessing.scale(X) # scale X before the classifier - this can help with accuracy but takes longer: can be skipped
y = np.array(df['label']) # array of labels

### create training and testing sets
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2) # 0.2 = 20% of the dataframe

### Swapping different algorithms
# clf = LinearRegression() # simple linear regressions
# clf = LinearRegression(n_jobs=10) # linear regression using threading, 10 jobs at a time = faster
clf = LinearRegression(n_jobs=-1) # linear regression using threading with as many jobs as the processor will handle
# clf = svm.SVR() # base support vector regression
# clf = svm.SVR(kernel="poly") # support vector regression with specific kernel

clf.fit(X_train, y_train) # fit the classifier to the training data
accuracy = clf.score(X_test, y_test) # score it against test

print(accuracy)

Machine Learning – Tutorial 3

Regression – Features and Labels

So the first two tutorials basically introduced the topic and imported some stock data – straightforward. Biggest takeaway being the use of Quandl – I’ll be doing some research into them at a later date.

So this tutorial gets into the meat of regression, using NumPy to convert data into NumPy arrays for scikit-learn to do its thing.

Quick note on features and labels:-

  • features are the descriptive attributes
  • labels are what we’re trying to predict or forecast

A common example with regression might be to try to predict the dollar value of an insurance policy premium for someone. The company may collect your age, past driving infractions, public criminal record, and your credit score for example. The company will use past customers, taking this data, and feeding in the amount of the “ideal premium” that they think should have been given to that customer, or they will use the one they actually used if they thought it was a profitable amount.

Thus, for training the machine learning classifier, the features are customer attributes, the label is the premium associated with those attributes.
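A toy sketch of that split (entirely made-up numbers, just to make the shapes concrete): the features form a matrix with one row per customer, the labels a vector with one premium per row.

```python
import numpy as np

# hypothetical customers: [age, past infractions, credit score] - the features
X = np.array([[25, 2, 610],
              [40, 0, 720],
              [58, 1, 680]])

# the premium the company settled on for each customer - the labels
y = np.array([950.0, 520.0, 640.0])

print(X.shape, y.shape)  # one label per row of features
```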

import pandas as pd
import quandl, math # imports math and quandl

df = quandl.get('WIKI/GOOGL') # import data from Quandl
# print (df.head()) # print out the head rows of the data to check what we're getting

# create a dataframe
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close','Adj. Volume']]
df['HL_pct'] = ((df['Adj. High'] - df['Adj. Close']) / df['Adj. Close']) * 100
df['pct_Change'] = ((df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open']) * 100

df = df[['Adj. Close','HL_pct','pct_Change','Adj. Volume']]


forecast_col = 'Adj. Close' # define what we're forecasting
df.fillna(-99999, inplace=True) # replaces missing data with an outlier value (-99999) rather than discarding any data

forecast_out = int(math.ceil(0.01*len(df))) # takes 1% of the length of the dataframe, rounds it up with math.ceil and converts it to an integer

df['label'] = df[forecast_col].shift(-forecast_out) # adds a new column 'label' containing the 'Adj. Close' value from forecast_out days in the future
df.dropna(inplace=True)
print (df.head()) #just used to check data
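The shift(-forecast_out) line is the easiest one to misread, so here’s a toy example (made-up prices) showing how it pairs each row with the close price forecast_out rows later:

```python
import pandas as pd

prices = pd.DataFrame({'Adj. Close': [10.0, 11.0, 12.0, 13.0, 14.0]})
forecast_out = 2

# each row's label becomes the close price forecast_out rows into the future
prices['label'] = prices['Adj. Close'].shift(-forecast_out)
prices.dropna(inplace=True)  # the last forecast_out rows have no future value to learn from

print(prices)  # row 0 is labelled 12.0, row 2 is labelled 14.0
```

So the model trains on “features today, price forecast_out days ahead” pairs, and the final rows (with no known future price) are exactly the ones held back for prediction.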

Tutorial script here:-

https://pythonprogramming.net/features-labels-machine-learning-tutorial/?completed=/regression-introduction-machine-learning-tutorial/

Machine Learning – Tutorial 2

Regression Intro

import pandas as pd
import quandl

df = quandl.get('WIKI/GOOGL') # import data from Quandl
# print (df.head()) # print out the head rows of the data to check what we're getting

# create a dataframe
df = df[['Adj. Open', 'Adj. High', 'Adj. Low', 'Adj. Close','Adj. Volume']]
df['HL_pct'] = ((df['Adj. High'] - df['Adj. Close']) / df['Adj. Close']) * 100
df['pct_Change'] = ((df['Adj. Close'] - df['Adj. Open']) / df['Adj. Open']) * 100

df = df[['Adj. Close','HL_pct','pct_Change','Adj. Volume']]
print (df.head()) # just used to check the data