Issue
First of all, I apologize for my poor explanation. I wanted to scale the values in the thousands and hundreds down to the tens, because the stock price data in the CSV was wrong. I have finally managed to fix the Close column thanks to @Vincent's answer, although I think it is still not the most orthodox and clean way to do it. Thank you very much for responding.
Open High Low Close Adj Close
Date
2014-10-31 25.350000 25.350000 25.350000 25.350000 24.343254
2015-03-31 27.299999 27.299999 27.299999 27.299999 26.215811
2015-04-30 28.020000 28.020000 28.020000 28.020000 26.907215
2015-06-30 27.230000 27.230000 27.230000 27.230000 26.148592
2015-07-31 29.030001 29.030001 29.030001 29.030001 27.877106
2015-09-30 23.059999 23.059999 23.059999 23.059999 22.144196
2015-11-30 20.889999 20.889999 20.889999 20.889999 20.060377
2016-02-29 16.780001 16.780001 16.780001 16.780001 16.113602
2016-03-31 15.600000 15.600000 15.600000 15.600000 14.980463
2016-05-31 17.070000 17.070000 17.070000 17.070000 16.392086
2016-06-30 16.540001 16.540001 16.540001 16.540001 15.883134
2016-08-31 17.969999 17.969999 17.969999 17.969999 17.256340
2016-09-30 17.030001 17.030001 17.030001 17.030001 16.353674
2016-10-31 16.250000 16.250000 16.250000 16.250000 15.604650
2016-11-30 18.129999 18.129999 18.129999 18.129999 17.409985
2017-01-31 18.150000 18.150000 18.150000 18.150000 17.429192
2017-02-28 18.250000 18.250000 18.250000 18.250000 17.525223
2017-03-10 970.000000 987.500000 970.000000 983.000000 943.961243
2017-03-13 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-14 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-15 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-16 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-17 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-20 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-21 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-22 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-23 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-24 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-27 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-28 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-29 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-30 983.000000 983.000000 983.000000 983.000000 943.961243
2017-03-31 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-03 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-04 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-05 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-06 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-07 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-10 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-11 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-12 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-13 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-18 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-19 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-20 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-21 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-24 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-25 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-26 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-27 983.000000 983.000000 983.000000 983.000000 943.961243
2017-04-28 983.000000 983.000000 983.000000 983.000000 943.961243
2017-05-02 1228.000000 1230.000000 1221.000000 1220.000000 1171.549072
2017-05-03 1215.000000 1225.500000 1213.000000 1221.000000 1172.509399
2017-05-04 1230.000000 1236.319946 1225.000000 1229.000000 1180.191650
2017-05-05 1233.000000 1233.000000 1213.719971 1214.000000 1165.787354
2017-05-08 1215.000000 1219.719971 1204.000000 1211.000000 1162.906494
This is my code:
import pandas as pd

df = pd.read_csv('psh.csv')
df.set_index('Date', inplace=True)
df.index = pd.to_datetime(df.index)
df.ffill(inplace=True)

# Rebuild the Close column, scaling down any value that is off by a factor of 100
close = []
for i in df['Close']:
    if i > 100:
        i = i / 100
    close.append(i)
df['Close'] = close
And now I have the Close column like I wanted:
Open High Low Close
Date
2014-10-31 25.35 25.350000 25.350000 25.35
2014-11-03 25.35 25.350000 25.350000 25.35
2014-11-04 25.35 25.350000 25.350000 25.35
2014-11-05 25.35 25.350000 25.350000 25.35
2014-11-06 25.35 25.350000 25.350000 25.35
... ... ... ... ...
2020-08-17 1948.00 1948.000000 1908.959961 19.30
2020-08-18 1924.00 1930.000000 1908.000000 19.20
2020-08-19 1916.00 1932.000000 1910.000000 19.32
2020-08-20 1912.00 1948.000000 1912.000000 19.30
2020-08-21 1930.00 1944.910034 1924.000000 19.42
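For reference, the same correction can be written without building a Python list, by letting pandas apply the division vectorised. This is only a sketch of the more idiomatic approach, under the same assumption the loop above makes (any Close above 100 is off by a factor of 100):

# Divide only the rows whose Close is above 100, in place
df.loc[df['Close'] > 100, 'Close'] /= 100

# The same rule could be applied to several price columns at once, e.g.:
# cols = ['Open', 'High', 'Low', 'Close', 'Adj Close']
# df[cols] = df[cols].where(df[cols] <= 100, df[cols] / 100)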
Solution
Do you just want to modify the value of some of the data inside your file? I am no expert in CSV, but as far as I remember you are essentially manipulating a list of lists, aren't you?
So if your intent is to have 9.43 in your last column instead of 943, i.e. to divide all the values of your last column by 100, you could try this, I suppose:
import csv

with open('yourfile.csv', newline='') as f:
    csv_f = csv.reader(f)
    header = next(csv_f)  # skip the header row
    for row in csv_f:
        # csv gives you strings, so convert before dividing
        row[5] = float(row[5]) / 100
If you also want to trim some of the decimals after the decimal point, you can format the value with "%.3f" % row[5]
to round your data to three decimal places.
Is that what you were trying to do?
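Note that the loop above only changes the rows in memory; to make the change stick you would also need to write the rows back out. A minimal sketch, assuming the hypothetical file names 'yourfile.csv' and 'fixed.csv' and that the price to rescale sits in column index 5:

import csv

with open('yourfile.csv', newline='') as src, \
     open('fixed.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))       # copy the header row unchanged
    for row in reader:
        value = float(row[5]) / 100     # rescale the last price column
        row[5] = "%.3f" % value         # keep three decimal places
        writer.writerow(row)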
Answered By - Vincent