
A concise tutorial on numpy, scipy, matplotlib, pandas, and more

2017-09-17 20:27
numpy documentation (updating…)

A concise tutorial by example for numpy, scipy, matplotlib, pandas, keras, and scikit-learn

Basics

numpy's main object is the homogeneous multidimensional array: a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. In numpy, dimensions are called axes, and the number of axes is called the rank.

For example, the coordinates of a point in 3D space, [1, 2, 1], form an array of rank 1: it has a single axis, and that axis has length 3. In the example below, the array has rank 2 (it is two-dimensional): the first axis has length 2 and the second has length 3.

[[1., 0., 0.],
[0., 1., 2.]]
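
A quick check of the rank and axis lengths claimed above:

import numpy as np

a = np.array([[1., 0., 0.],
              [0., 1., 2.]])
print(a.ndim)   # 2 -> rank 2
print(a.shape)  # (2, 3): first axis has length 2, second has length 3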


numpy's array class is called ndarray, and it also goes by the alias array. Note that numpy.array is not the same as the Python standard library's array.array, which handles only one-dimensional arrays and offers far less functionality.
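
A small sketch of the difference; note that `*` is element-wise arithmetic for ndarray, but repetition (as for lists) for array.array:

import array
import numpy as np

py_arr = array.array('d', [1.0, 2.0, 3.0])  # one-dimensional only, typecode-based
np_arr = np.array([1.0, 2.0, 3.0])
print(np_arr * 2)  # [2. 4. 6.] -- element-wise
print(py_arr * 2)  # array('d', [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]) -- repetition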

The more important attributes of an ndarray object are:

ndarray.ndim


the number of axes (dimensions) of the array. In the Python world, the number of dimensions is referred to as the rank.


ndarray.shape


the dimensions of the array. This is a tuple of integers indicating the size of the array along each dimension. For a matrix with n rows and m columns, shape is (n, m). The length of this tuple is therefore the rank, i.e. the number of dimensions, ndim.


ndarray.size


the total number of elements in the array, equal to the product of the elements of shape.


ndarray.dtype


an object describing the type of the elements in the array. A dtype can be created or specified using standard Python types, and numpy additionally provides types of its own, e.g. numpy.int32, numpy.int16, and numpy.float64.


ndarray.itemsize


the size in bytes of each element of the array. For example, an array of elements of type float64 has itemsize 8 (= 64/8), while one of type complex32 has itemsize 4 (= 32/8). It is equivalent to ndarray.dtype.itemsize.


ndarray.data


the buffer containing the actual elements of the array. Normally we will not need to use this attribute, because we access the elements with indexing facilities.
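
A minimal sketch exercising these attributes (the default integer dtype, and hence itemsize, is platform-dependent):

import numpy as np

a = np.arange(15).reshape(3, 5)
print(a.ndim)        # 2
print(a.shape)       # (3, 5)
print(a.size)        # 15
print(a.dtype)       # int64 on most 64-bit platforms
print(a.itemsize)    # 8 bytes for int64
print(type(a.data))  # a memoryview over the raw buffer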


……

Pandas

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings('ignore')
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese (CJK) labels correctly; requires the SimHei font
plt.rcParams['axes.unicode_minus'] = False

def test1():
    # each Series carries an `index`; merging several Series into a `DataFrame`
    # turns each corresponding index/value pair into a row
    # print(pd.Index([3]*4))
    # print(pd.Index(range(4)))
    # print(pd.date_range('20180201', periods=4))  # DatetimeIndex; default freq is 'D' (calendar daily)
    # print(pd.period_range('20180101', '2018-01-04'))  # PeriodIndex
    # print(pd.Index(data=[i for i in 'ABCDEF']))
    # print(list(pd.RangeIndex(10)))
    # s = pd.Series(10)  # scalar
    # s = pd.Series(data=[1, 2, 3], index=[10, 20, 20])  # array-like; non-unique index values are allowed
    s = pd.Series({'a': 10, 10: 'AA'}, index=['aa', 10])  # dict; index entries absent from the dict become NaN
    print(s)  # print(s[:])

    # df = pd.DataFrame(data=np.random.randn(4, 3), index=pd.RangeIndex(1, 5), columns=['A', 'B', 'C'])  # ndarray
    # df = pd.DataFrame(data={'A': np.array(range(1, 4))**2, 'B': pd.Timestamp('20180206'),
    #                         'C': pd.Series(data=['MLee', 'python', 'Pearson']), 'D': 126,
    #                         'E': pd.Categorical(values=['Upper', 'Middle', 'Lower'], categories=['Middle', 'Lower']),
    #                         'F': 'Laplace'}, index=pd.RangeIndex(3), columns=['A', 'B'])  # dict
    df = pd.DataFrame(data={'A': np.array(range(1, 4))**2, 'B': pd.Timestamp('20180206'),
                            'C': pd.Series(data=['MLee', 'python', 'Pearson']), 'D': [126, 10, 66],
                            'E': pd.Categorical(values=['Upper', 'Middle', 'Lower'], categories=['Middle', 'Lower']),
                            'F': 'Laplace'}, index=pd.RangeIndex(3), columns=pd.Index([i for i in 'FEDCBA']))  # dict
    print(df)
    # print(df.dtypes)
    # print(df.index)
    # print(df.columns)
    # print(df.values)  # numpy.ndarray
    # print('*'*126)
    # print(df.info())
    # print('*'*126)
    # print(df.describe())
    # print(df.transpose())
    # print(df.sort_index(axis=0, ascending=False))
    # print(df.sort_index(axis=1))
    # print(df.sort_values(by='D'))

    print('*'*126)
    df = pd.DataFrame(data=np.arange(24).reshape(6, 4), index=pd.date_range('20180201', periods=6),
                      columns=pd.Index([i for i in 'ABCD']))
    # df = pd.DataFrame(data=np.arange(24).reshape(6, 4))
    # print(df[0])
    # print(df[0:1])  # selecting rows requires a slice
    print(df[:])
    print(df['A'])  # print(df.A)
    print(df[0:2][['A', 'B']])
    print(df['20180201':'20180202'][['A', 'B']])
    # selecting rows requires a `slice`, while selecting columns requires a `list`
    # print(df[['A', 'B']])
    # print(df[0:2])  # excludes the 3rd row
    # print(df['20180201':'20180203'])  # includes the row at index `20180203`
    # print(df[0:1])
    # print(df.loc['20180201'])  # fetches a single row
    # for rows & columns, df.loc takes `index` and `column` labels, while df.iloc takes positional
    # `slice`s; df.ix additionally supports mixed selection
    print(df.loc['20180201':'20180202'][['A', 'B']])
    print(df.loc['20180201':'20180202', ['A', 'B']])  # print(df['20180201':'20180202', ['A', 'B']])  # error
    # equivalent to df.iloc[0, 0]
    print(df.iloc[0:1, 0:1])  # positional indices for rows and columns alike (0, 1, 2, ...)
    print(df.iloc[[0, 2, 4], 0:2])
    print(df.ix[0:2, 0:2])
    # print(df.ix['20180201':'20180202', 0:2])
    # print(df.ix[0:2, ['A', 'B']])
    # print(df.ix['20180201':'20180202', ['A', 'B']])
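    # note (added): df.ix is deprecated and removed in modern pandas; the label-based
    # df.loc and position-based df.iloc shown above cover the same use cases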
    print('**')
    print(df['B'][df.A > 4])  # print(df.B[df.A > 4])
    df.B[df.A > 4] = np.nan
    print(df)
    df['E'] = 0
    print(df)
    df['F'] = pd.Series(data=range(6), index=pd.date_range('20180201', periods=6))
    print(df)

    print('*'*12)
    df = pd.DataFrame(data=np.arange(24).reshape(6, 4), index=pd.date_range('20180201', periods=6),
                      columns=pd.Index([i for i in 'ABCD']))
    # print(df)
    # df.dropna()
    # df.fillna()
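    # a minimal sketch (added) of the two NaN-handling calls above:
    df2 = df.copy()
    df2.iloc[0, 0] = np.nan
    print(df2.dropna(how='any'))  # drops every row containing a NaN
    print(df2.fillna(value=0))    # fills NaN with 0; both return copies unless inplace=True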

def test2():
    dataset_training = pd.read_csv('C:/users/myPC/Desktop/ml/Titanic/train.csv')
    # print(dataset_training)
    print(dataset_training.Survived.value_counts())
    deceased = dataset_training.Pclass[dataset_training.Survived == 0].value_counts(sort=True)
    survived = dataset_training.Pclass[dataset_training.Survived == 1].value_counts(sort=True)
    # print(deceased, survived, sep='\n')
    df = pd.DataFrame({'Survived': survived, 'Deceased': deceased})
    print(df)
    df.plot(kind='bar', stacked=True)
    plt.title('Distribution of SES')
    plt.xlabel('Class')
    plt.ylabel('Numbers')
    plt.show()

def test3():
    id = ['1001', '1008', '1102', '1001', '1003', '1101', '1126', '1007']
    name = ['Shannon', 'Gauss', 'Newton', 'Leibniz', 'Taylor', 'Lagrange', 'Laplace', 'Fourier']
    country = ['America', 'Germany', 'Britain', 'Germany', 'Britain', 'France', 'France', 'France']
    iq = [168, 180, 172, 228, 182, 172, 160, 186]
    sq = [180, 194, 160, 274, 150, 200, 158, 180]
    eq = [144, 152, 134, 166, 118, 144, 156, 128]
    dataset = list(zip(id, name, country, iq, sq, eq))
    df = pd.DataFrame(data=dataset, columns=['Id', 'Name', 'Country', 'IQ', 'SQ', 'EQ'])
    df.to_csv('persons.csv', index=True, header=True)
    df = pd.read_csv('persons.csv', usecols=range(1, 7))
    print(df)
    # print(df.info())
    # print(df[df.IQ == df.IQ.max()])
    print(df.sort_values(by='IQ', axis=0, ascending=False))  # df.head(1)
    plt.subplot2grid((1, 3), (0, 0))
    df.IQ.plot()
    df.SQ.plot()
    df.EQ.plot()
    for i in range(df.shape[0]):
        plt.annotate(s=df.ix[i, 'Name'], xy=(i, df.ix[i, 'IQ']), xytext=(1, 1), xycoords='data', textcoords='offset points')
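    # note (added): modern matplotlib (>= 3.3) names annotate's first parameter
    # `text` instead of `s`, and modern pandas replaces df.ix with df.loc/df.iloc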
    plt.subplot2grid((1, 3), (0, 1), colspan=2)
    df[['IQ', 'SQ', 'EQ']].plot(kind='bar')
    # df['IQ'].plot(kind='bar')
    # df['SQ'].plot(kind='bar')
    # df['EQ'].plot(kind='bar')
    for i in range(df.shape[0]):
        plt.annotate(s=df.ix[i, 'Name'], xy=(i, df.ix[i, 'IQ']), xytext=(1, 1), xycoords='data', textcoords='offset points')
    plt.show()

def test4():
    data_train = pd.read_csv(r"C:\Users\myPC\Desktop\ml\Titanic\train.csv")
    # plt.subplot2grid((2, 3), (0, 0))  # lay several subplots out in one figure

    survived = data_train.Pclass[data_train.Survived == 1].value_counts()
    deceased = data_train.Pclass[data_train.Survived == 0].value_counts()
    pd.DataFrame({'Survived': survived, 'deceased': deceased}).plot(kind='bar', stacked=True)

    # print(data_train.Sex[data_train.Survived == 1].value_counts())

    print(data_train.groupby(by='Survived').count())

    # data_train.Survived.value_counts().plot(kind='bar')  # bar chart
    # plt.title("Survival (1 = survived)")
    # plt.ylabel("Number of people")

    # plt.subplot2grid((2, 3), (0, 1))
    # data_train.Pclass.value_counts().plot(kind="bar")
    # plt.ylabel("Number of people")
    # plt.title("Passenger class distribution")
    #
    # plt.subplot2grid((2, 3), (0, 2))
    # plt.scatter(data_train.Survived, data_train.Age)
    # plt.ylabel("Age")  # set the y-axis label
    # plt.grid(b=True, which='major', axis='y')
    # plt.title("Survival by age (1 = survived)")
    #
    # plt.subplot2grid((2, 3), (1, 0), colspan=2)
    # data_train.Age[data_train.Pclass == 1].plot(kind='kde')
    # data_train.Age[data_train.Pclass == 2].plot(kind='kde')
    # data_train.Age[data_train.Pclass == 3].plot(kind='kde')
    # plt.xlabel("Age")  # plots an axis label
    # plt.ylabel("Density")
    # plt.title("Passenger age distribution by class")
    # plt.legend(('1st class', '2nd class', '3rd class'), loc='best')  # sets the legend for the graph
    #
    # plt.subplot2grid((2, 3), (1, 2))
    # data_train.Embarked.value_counts().plot(kind='bar')
    # plt.title("Boarding counts by embarkation port")
    # plt.ylabel("Number of people")
    plt.show()

def test5():
    url = r'http://s3.amazonaws.com/assets.datacamp.com/course/dasi/present.txt'
    present = pd.read_table(url, sep=' ')
    # print(present)
    # present.set_index(keys=['year'], inplace=True)
    # print(present)
    print(present.columns)
    print(present.index)
    print(present.dtypes)
    # present.boys.plot(kind='kde')
    # present.girls.plot(kind='kde')
    present.set_index(keys=['year'], inplace=True)
    kinds = ['line', 'bar', 'barh', 'hist', 'box', 'kde', 'density', 'area', 'pie', 'scatter', 'hexbin']
    # plt.figure()
    # for i in range(len(kinds)):
    #     plt.subplot2grid(shape=(2, 3), loc=(i//3, i % 3))
    #     present[:10].plot(kind=kinds[i], subplots=True)
    present[:].plot(x='boys', y='girls', kind=kinds[-1])
    plt.legend(loc='upper right')
    plt.show()

def test6():
    s = pd.Series(data=np.random.randn(1000), index=pd.date_range('20180101', periods=1000))
    print(s)
    s = np.exp(s.cumsum())
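    # (added) exponentiating the cumulative sum turns the Gaussian increments into a
    # geometric random walk; the log-scaled y axis below undoes the exponential,
    # so the plot reads like an ordinary random walk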
    s.plot(style='m*', logy=True)
    plt.show()

def test7():
    df = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
                     names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'name'])
    # print(df)
    # df.boxplot(by='name')
    # df.plot(kind='kde')
    # df.ix[:, :-1].plot(kind='hist')
    setosa = df[df.name == 'Iris-setosa']
    versicolor = df[df.name == 'Iris-versicolor']
    virginica = df[df.name == 'Iris-virginica']
    # plt.subplot2grid(shape=(1, 3), loc=(0, 0))
    plt.subplot(131)
    pd.DataFrame.plot(setosa)
    # setosa.plot(title='setosa', subplots=True)
    # plt.subplot2grid(shape=(1, 3), loc=(0, 1))
    # versicolor.plot(title='versicolor', subplots=True)
    # pd.DataFrame.plot(versicolor)
    # plt.subplot2grid(shape=(1, 3), loc=(0, 2))
    # virginica.plot(title='virginica', subplots=True)
    # pd.DataFrame.plot(data=virginica)
    plt.show()
    # df.sepal_length.plot(kind='hist')
    # plt.show()

def test8():
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn import preprocessing
    from sklearn import linear_model

    dataset_training = pd.read_csv('C:/users/myPC/Desktop/ml/Titanic/train.csv')
    dataset_test = pd.read_csv('C:/users/myPC/Desktop/ml/Titanic/test.csv')
    passenger_id = dataset_test['PassengerId']
    # the feature `Fare` has exactly one missing value in the test data
    dataset_test.loc[dataset_test.Fare.isnull(), 'Fare'] = 0.0
    # drop the irrelevant features
    dataset_training.drop(labels=['PassengerId', 'Name', 'Ticket'], axis=1, inplace=True)
    dataset_test.drop(columns=['PassengerId', 'Name', 'Ticket'], inplace=True)
    # predict the missing `Age` values from the other features
    dataset_training_age = dataset_training[['Pclass', 'SibSp', 'Parch', 'Fare', 'Age']]
    dataset_test_age = dataset_test[['Pclass', 'SibSp', 'Parch', 'Fare', 'Age']]
    age_known0 = dataset_training_age[dataset_training_age.Age.notnull()].as_matrix()  # get the `ndarray` (as_matrix is .values / .to_numpy() in modern pandas)
    age_unknown0 = np.array(dataset_training_age[dataset_training_age.Age.isnull()])
    age_unknown1 = dataset_test_age[dataset_test_age.Age.isnull()].as_matrix()
    training_data_age = age_known0[:, :-1]
    training_target_age = age_known0[:, -1]
    rfr = RandomForestRegressor(n_estimators=1000, n_jobs=-1, random_state=0)  # fit with 1000 trees
    rfr.fit(training_data_age, training_target_age)
    predicts = rfr.predict(age_unknown0[:, :-1])
    dataset_training.ix[dataset_training.Age.isnull(), 'Age'] = predicts  # fill in the missing `Age` values
    # the RandomForestRegressor fit on the training data also predicts the test-set ages
    dataset_test.loc[dataset_test.Age.isnull(), 'Age'] = rfr.predict(age_unknown1[:, :-1])
    dataset_training.ix[dataset_training.Cabin.notnull(), 'Cabin'] = 'Yes'  # mark `Cabin` as `Yes` where notnull
    dataset_training.ix[dataset_training.Cabin.isnull(), 'Cabin'] = 'No'  # otherwise `No`
    dataset_test.ix[dataset_test.Cabin.notnull(), 'Cabin'] = 'Yes'
    dataset_test.ix[dataset_test.Cabin.isnull(), 'Cabin'] = 'No'
    # dummy-encode the `object`/`category` fields to remove ordinal relations between categories
    dataset_training_dummies = pd.get_dummies(dataset_training, columns=['Pclass', 'Sex', 'Cabin', 'Embarked'])
    dataset_test_dummies = pd.get_dummies(dataset_test, columns=['Pclass', 'Sex', 'Cabin', 'Embarked'])
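    # (added) e.g. `Sex` becomes two 0/1 indicator columns, Sex_female and Sex_male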
    ss = preprocessing.StandardScaler()  # standardize the features whose scales differ widely
    dataset_training_dummies['Age'] = ss.fit_transform(dataset_training_dummies.Age.values.reshape(-1, 1))
    dataset_training_dummies['Fare'] = ss.fit_transform(dataset_training_dummies.Fare.values.reshape(-1, 1))
    dataset_test_dummies['Age'] = ss.fit_transform(dataset_test_dummies.Age.values.reshape(-1, 1))
    dataset_test_dummies['Fare'] = ss.fit_transform(dataset_test_dummies.Fare.values.reshape(-1, 1))
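    # note (added): strictly, the scaler should be fit on the training set only and
    # applied to the test set with ss.transform(...), to avoid leaking test statistics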
    # collect all processed samples
    print(dataset_training_dummies)
    dataset_training_dummies = dataset_training_dummies.filter(regex='Age|SibSp|Parch|Fare|Pclass_*|Sex_*|Cabin_*|Embarked_*|Survived').as_matrix()
    # print(dataset_training_dummies.info())
    training_data = dataset_training_dummies[:, 1:]
    training_target = dataset_training_dummies[:, 0]  # 1-D target (column 0 is `Survived`)
    lr = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-5)
    from sklearn import model_selection

    print(model_selection.cross_val_score(lr, training_data, training_target, cv=4))
    lr.fit(training_data, training_target)
    predicts = lr.predict(dataset_test_dummies)
    ans = pd.DataFrame({'PassengerId': passenger_id, 'Survived': predicts.astype(np.int32)})
    # print(ans)
    # ans.to_csv('C:/users/myPC/Desktop/ml/Titanic/submission.csv', index=False)  # omit the row index
    # print(pd.DataFrame({'features': list(dataset_test_dummies[1:]), 'coef': list(lr.coef_.T)}))

def test9():
    import numpy as np
    import numpy.linalg as nla
    import scipy.linalg as sla

    a = np.random.randint(20, size=(3, 4))
    print(a)
    print(np.diag(a))
    U, Sigma, V_H = nla.svd(a)  # U and V are unitary; Sigma holds the singular values (returned as a 1-D vector rather than a diagonal matrix)
    Sigma = np.concatenate((np.diag(Sigma), np.zeros((U.shape[0], V_H.shape[1]-U.shape[1]))), axis=1)
    print(U)
    print(Sigma)
    print(V_H)
    print(U.dot(Sigma.dot(V_H)))
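    # sanity check (added): the factorization should reproduce `a` up to float error
    print(np.allclose(a, U.dot(Sigma).dot(V_H)))  # True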

def google():
    import tensorflow as tf

    a = tf.constant((1, 1))
    b = tf.constant((2, 2))
    ans = a + b
    sess = tf.Session()
    # print(type(sess.run(ans)))
    print(sess.run(ans))

if __name__ == '__main__':
    # test9()
    # google()
    test8()
    # pd.concat()
    # df.drop()
    # df = pd.DataFrame({'A': [1, 2, 3], 'B': [np.nan, 1, np.nan], 'C': [10, 111, 1111], 'D': ['good', 'common', 'bad']})
    # df.loc[df.B.notnull(), 'B'] = 'Yes'  # must run before the isnull() branch, or every value ends up 'Yes'
    # df.loc[df.B.isnull(), 'B'] = 'No'
    # print(df.ix[:, 'B'])
    # df.ix[df.B.isnull(), 'B'] = [0, 0]
    # print(pd.get_dummies(df, columns=['D', 'B']))
    # print(df)
    # print(df.filter(regex='A|D|B'))
    # df = pd.get_dummies(df, prefix=['M', 'L'])  # the original (non-dummied) columns are lost
    # print(df)