1. Kozma et al. present a contemporary mathematical model of human behavior under some environmental constraints. How well does their model fit the human performance data? Is their solution algorithm blind overall to error reduction between input and associated output states or is it based on propagation or another Hebbian learning model?
2. A person approaching an isolated ranch house at twilight sees a barely visible human figure. How does the nature of declarative memory, and the possibility of its use in parallel distributed processing (PDP) during memory formation, affect the perception of this complex and potentially threatening scene? How can PDP augment memory in evaluating the potential risk in this and other potentially threatening situations?
Kozma et al. present a contemporary mathematical model of human behavior under some environmental constraints. How well does their model fit the human performance data? Is their solution algorithm blind overall to error reduction between input and associated output states or is it based on propagation or another Hebbian learning model?
The basic principle here is that any state of one's mind is understood, at its basis, as a complex combination of values (a vector) at any given time. This refers to the condition of the neurons and their effect on their neighbors. The inputs in any given test of learning have to have enough qualities to make the task a challenge, but not so many that it becomes too broad for any kind of "supervised" training to work. The environment here is controlled; anything outside of it is just background noise. Hebb is far more predictable than Kozma in terms of neuronal patterns and "habits" (see also Friedenberg, 1996, pp. 152-155).
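The idea that a mental state is a vector of neuron values, with co-active neurons strengthening their connections, can be sketched minimally in Python. This is a toy illustration of Hebb's rule, not Kozma's actual model; the network size, seed, and learning rate are all assumptions made for the example:

```python
import numpy as np

# Hypothetical sizes and seed, purely for illustration.
rng = np.random.default_rng(0)
n_neurons = 4

# The "state of mind" as a vector of neuron activations at one moment.
state = rng.standard_normal(n_neurons)

# Weight matrix: how each neuron affects its neighbors.
weights = np.zeros((n_neurons, n_neurons))

# Hebb's rule: connections strengthen in proportion to the
# correlation of pre- and post-synaptic activity.
learning_rate = 0.1
weights += learning_rate * np.outer(state, state)

# The updated weights now mirror the correlations in the state vector.
```

Because the update is an outer product of the state with itself, the resulting weight matrix is symmetric: each pair of co-active neurons reinforces the connection in both directions.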
For one thing, Kozma's approach stresses the existence of non-determination. The human-performance concept of AGI does not seem able to deal with this effectively, and it is not the same thing as the problem of "noise." Kozma is trying to find some freedom from the genetic structure as something totally determined. Yet there is no reason to believe that "noise" is not also structural, just so minutely so that it cannot yet be systematically measured. The point is that any stochastic model merely assumes that there is a "random" element. The question is: how does AGI create randomness? The closest it gets is saying that one's world, the environment in which one acts, is not entirely predictable. In theory it is, but there is no way to perform the calculations that quickly, and there is no reason to doubt freedom. The concept of "noise" is arbitrary (Segaran, 2007).
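The point that a stochastic model merely injects an assumed "random" element on top of otherwise determined dynamics can be made concrete. The update rule, noise scale, and seed below are hypothetical choices for the sketch, not anything from Kozma's paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def deterministic_step(x):
    # A purely determined update rule (toy dynamics).
    return np.tanh(x)

def stochastic_step(x, noise_scale=0.05):
    # The same rule with a "random" element injected explicitly.
    # The noise is not derived from the system; the model simply
    # assumes it, which is the passage's point.
    return np.tanh(x) + noise_scale * rng.standard_normal(x.shape)

x = np.ones(3)
a = stochastic_step(x)
b = stochastic_step(x)
# Deterministic runs repeat exactly; stochastic runs do not.
```

Nothing in the stochastic version explains where the randomness comes from; the generator is simply bolted on, which is why calling the residual "noise" is an arbitrary modeling decision rather than a discovery.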
Kozma refers to his algorithm as a "mechanism," which by definition is not random: algorithms are not random by their very structure. Yet they can model, by inference, any missing data that makes sense. Really, the input object has to be controlled and presented to the learner in a way that makes it clear which qualities are important and which are not. In addition, any algorithm in machine learning only deals with outputs anyway. If the system were closed, then a purely determined system could be figured. But in life there are always things that we cannot, of ourselves, determine. This means ...
A reduction of error between input and associated output states is thereby determined.
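The distinction the first question draws, between an update driven by error reduction between input and associated output states and a purely Hebbian one, can be sketched side by side. This is a toy one-layer learner; the sizes, seed, and rates are illustrative assumptions, not any published algorithm's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(3)              # input state
target = np.array([1.0])                # associated output state
w = 0.1 * rng.standard_normal((1, 3))   # initial weights
lr = 0.1

# Error-driven (delta) rule: the update is NOT blind to the
# discrepancy between the produced output and the target.
y = w @ x
error = target - y
w_delta = w + lr * np.outer(error, x)

# Hebbian rule: the update uses only the correlation of the
# input with the produced output; no error term appears.
y = w @ x
w_hebb = w + lr * np.outer(y, x)
```

Both rules change the weights, but only the first one consults the target, so only the first can be said to reduce error between input and associated output states; the Hebbian rule reinforces whatever correlation happens to exist.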