So I haven’t posted in a few days; things have been completely hectic for me with all the new possibilities for data optimization and the new types of neural networks I’ve been using. To summarize what’s been going on with my research so far:
1. TP optimization with FFNNs
Using 4 inputs (type of cross, MACD histogram value, MACD 9-EMA value, and RSI 14) and one output (TP) in a feed-forward neural network has yielded an overall profit around 50% better than the regional optimal fixed TP, though still below the overall optimal fixed TP. This is a good result nonetheless, since a strategy using the overall optimal fixed TP relies on a small number (under 10%) of heavy hits that are not guaranteed to recur in every time span.
However, this result was still obtained with plain MSE as the performance function, and without really smoothing the TP data. The next step is to modify the MSE function to place more error weight on outputs above the target, since an output even 1 point above the target results in a total miss. Reducing the number of outputs above target should significantly lower the miss count and therefore lead to considerably better performance, hopefully well above the overall optimal fixed TP. I will also try mapping the set of outputs into a min-max band centered on either the overall optimal fixed TP or the lower local fixed TP, with a range of around 25% in each direction.
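To make the two ideas above concrete, here is a minimal sketch of the asymmetric loss and the min-max band mapping. The function names, the overshoot weight of 4, and the 25% band are my own illustrative choices, not values from any toolbox:

```python
import numpy as np

def asymmetric_mse(outputs, targets, overshoot_weight=4.0):
    """MSE that penalizes predictions above the target more heavily,
    since a TP set even 1 point above the realized move is a total miss."""
    errors = outputs - targets
    # Overshoots (errors > 0) get a heavier weight than undershoots.
    weights = np.where(errors > 0, overshoot_weight, 1.0)
    return np.mean(weights * errors ** 2)

def clip_to_band(tp_outputs, center, band=0.25):
    """Map network outputs into a min-max band of +/- `band`
    around a chosen fixed TP (overall optimal or lower local)."""
    lo, hi = center * (1 - band), center * (1 + band)
    return np.clip(tp_outputs, lo, hi)
```

For example, with the default weight, overshooting the target by 1 point costs four times as much as undershooting it by the same amount, which is the bias I want the training to pick up.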
2. Hourly median price approximation with a NARX network
Performing one-step-ahead prediction with a NARX network, with 4 inputs (hourly median price, MACD histogram value, MACD 9-EMA value, and RSI 14) and one output (the next hourly median price), approximates the next median price with an average goal error of 50%; but when the predictions are plotted against a chart of high-low bars for the same prices, 90-95% of the predicted prices fall within the next bar. This is phenomenal given that even with such crude inputs, which say little about the circumstances of the trend, we can still produce a price that has a 90-95% chance of being hit in the next hour. Shaping this method up could bring the result very close to a 100% hit rate, especially with a training goal chosen to avoid over-fitting. Missed hits are also pretty easy to spot, since they stand out as obvious outliers.
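A rough sketch of the data arrangement behind this, assuming NumPy and leaving the actual NARX training to a toolbox: the helper names, the lag count, and the hit-rate check are my own illustrative framing, not part of any NARX implementation:

```python
import numpy as np

def make_narx_inputs(series, exog, lags=3):
    """Build one-step-ahead training pairs in NARX style: past `lags+1`
    values of the target series plus the current exogenous indicators
    (e.g. MACD histogram, MACD 9-EMA, RSI 14) predict the next value."""
    X, y = [], []
    for t in range(lags, len(series) - 1):
        X.append(np.concatenate([series[t - lags:t + 1], exog[t]]))
        y.append(series[t + 1])
    return np.array(X), np.array(y)

def hit_rate(predicted, next_high, next_low):
    """Fraction of one-step-ahead predictions that fall inside the
    next bar's high-low range (the 90-95% figure mentioned above)."""
    hits = (predicted >= next_low) & (predicted <= next_high)
    return hits.mean()
```

The key point is that the "hit" criterion is not exact price equality but containment within the next bar's range, which is why a mediocre goal error can still coexist with a very high hit rate.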
3. Classifying output with a Radial Basis Probabilistic Neural Network
With a radial basis probabilistic neural network, one can classify the type of the next bar: up, down, or flat, or the type of candle (hammer, shooting star, doji, etc.). I have not tried this yet, but it is very promising given the strength of PNNs in classification and their relatively high error tolerance: no need for 90% accuracy, since 70% would still be highly profitable and well beyond any known trading system. I will try clustering soon with PNNs and SOMs (Self-Organizing Maps).
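For readers unfamiliar with PNNs, here is a minimal Parzen-window sketch of the classification step I have in mind; the function name, the toy labels, and the smoothing parameter are my own illustrative choices:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Minimal PNN-style classifier: sum a Gaussian kernel over the
    training points of each class and pick the class with the largest
    summed activation (a Parzen density estimate per class)."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        # Squared distances from x to every training point of class c.
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).sum())
    return classes[int(np.argmax(scores))]
```

Each class's score behaves like an unnormalized probability, which is exactly why PNNs tolerate noisy labels fairly well: a single mislabeled bar only slightly shifts the class density rather than flipping a decision boundary.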
If any of the three points above seems extremely vague or hard to follow, that's perfectly OK. It will all be much clearer when I get around to uploading charts and diagrams that will hopefully clear up the above-mentioned mumbo-jumbo.