Artificial Neural Networks Solved MCQs


SET-1 (Characteristics)

1. On which issues do biological networks prove to be superior to artificial neural networks?
a) robustness & fault tolerance
b) flexibility
c) collective computation
d) all of the mentioned
 
Answer: d
Explanation: Biological networks are superior to artificial networks in robustness & fault tolerance, flexibility, and collective computation.
2. The fundamental unit of network is
a) brain
b) nucleus
c) neuron
d) axon
 
Answer: c
Explanation: The neuron is the most basic & fundamental unit of a network.
3. What are dendrites?
a) fibers of nerves
b) nuclear projections
c) other name for nucleus
d) none of the mentioned
 
Answer: a
Explanation: Dendrites are tree-shaped nerve fibers.
4. What is the shape of dendrites?
a) oval
b) round
c) tree
d) rectangular
 
Answer: c
Explanation: Dendrites branch out from the cell body in a tree-like shape; this is a basic biological fact.
5. Signal transmission at synapse is a?
a) physical process
b) chemical process
c) physical & chemical both
d) none of the mentioned
 
Answer: b
Explanation: Since chemicals (neurotransmitters) are involved at the synapse, it is a chemical process.
6. How is the transmission/pulse acknowledged?
a) by lowering electric potential of neuron body
b) by raising electric potential of neuron body
c) both by lowering & raising electric potential
d) none of the mentioned
 
Answer: c
Explanation: The electric potential of the neuron body may be either lowered or raised, so both are possible.
7. When is the cell said to be fired?
a) if the potential of the body reaches a steady threshold value
b) if there is impulse reaction
c) during upbeat of heart
d) none of the mentioned
 
Answer: a
Explanation: A cell is said to be fired if & only if the potential of its body reaches a certain steady threshold value.
8. Where do the chemical reactions take place in a neuron?
a) dendrites
b) axon
c) synapses
d) nucleus
 
Answer: c
Explanation: Chemical reactions in a neuron take place at the synapses; it is a simple biological fact.
9. What is the function of dendrites?
a) receptors
b) transmitter
c) both receptor & transmitter
d) none of the mentioned
 
Answer: a
Explanation: Dendrites are tree-like projections whose only function is to receive impulses.
10. What is the purpose of the axon?
a) receptors
b) transmitter
c) transmission
d) none of the mentioned
 
Answer: c
Explanation: The axon carries impulses away from the cell body, so its purpose is transmission rather than reception.



SET-2 (Characteristics)



1. What is the approximate size of the neuron body (in micrometers)?
a) below 5
b) 5-10
c) 10-80
d) above 100

Answer: c
Explanation: The average size of the neuron body lies in the 10-80 micrometer range.

2. What is the gap at the synapse (in nanometers)?
a) 50
b) 100
c) 150
d) 200

Answer: d
Explanation: The synaptic gap is close to 200 nm.

3. What is the charge of the protoplasm in the state of inactivity?
a) positive
b) negative
c) neutral
d) may be positive or negative

Answer: b
Explanation: The negative charge is due to the presence of potassium ions on the outer surface, in the neural fluid.

4. What is the main constituent of neural liquid?
a) sodium
b) potassium
c) Iron
d) none of the mentioned

Answer: b
Explanation: Potassium is the main constituent of the neural liquid & is responsible for the potential on the neuron body.

5. What is the average potential of the neural liquid in the inactive state?
a) +70mv
b) +35mv
c) -35mv
d) -70mv

Answer: d
Explanation: The resting potential of about -70 mV was established by a series of experiments conducted by neuroscientists.

6. At what potential does the cell membrane lose its impermeability to Na+ ions?
a) -50mv
b) -35mv
c) -60mv
d) -65mv

Answer: c
Explanation: The cell membrane loses its impermeability to Na+ ions at -60 mV.

7. What is the effect on the neuron as a whole when its potential is raised to -60 mV?
a) it get fired
b) no effect
c) it get compressed
d) it expands

Answer: a
Explanation: The cell membrane loses its impermeability to Na+ ions at -60 mV, so the neuron gets fired.

8. The membrane which allows neural liquid to flow will?
a) never be impermeable to neural liquid again
b) regenerate & retain its original capacity
c) have only a certain part affected, while the rest becomes impermeable again
d) none of the mentioned

Answer: b
Explanation: The membrane regenerates & regains its original capacity; each internal cell of the human body has this regenerative capability.

9. How fast is the propagation of the discharge signal in cells of the human brain?
a) less than 0.1m/s
b) 0.5-2m/s
c) 2-5m/s
d) 5-10m/s

Answer: b
Explanation: The propagation speed is 0.5-2 m/s, which is very fast relative to the length of a neuron.

10. What is the function of neurotransmitters?
a) they transmit data directly at synapse to other neuron
b) they modify conductance of post synaptic membrane for certain ions
c) cause polarisation or depolarisation
d) both polarisation & modify conductance of membrane

Answer: d
Explanation: Excitatory & inhibitory activities are the result of these two processes.


SET-3 (Characteristics)



1. The cell body of a neuron is analogous to what mathematical operation?
a) summing
b) differentiator
c) integrator
d) none of the mentioned

Answer: a
Explanation: The summing of potentials (due to the neural fluid) at different parts of the neuron is what causes it to fire.

2. What is the critical threshold voltage value at which the neuron gets fired?
a) 30mv
b) 20mv
c) 25mv
d) 10mv

Answer: d
Explanation: This critical value of about 10 mV was established by a series of experiments conducted by neuroscientists.

3. Is there any effect on a particular neuron that is repeatedly fired?
a) yes
b) no

Answer: a
Explanation: The strength of the neuron to fire in the future increases.

4. What is the name of the above mechanism?
a) hebb rule learning
b) error correction learning
c) memory based learning
d) none of the mentioned

Answer: a
Explanation: It follows from the basic definition of Hebb rule learning.

5. What is Hebb's rule of learning?
a) the system learns from its past mistakes
b) the system recalls previous reference inputs & respective ideal outputs
c) the strength of neural connection get modified accordingly
d) none of the mentioned

Answer: c
Explanation: The strength of the neural connection is modified accordingly: if a neuron is fired repeatedly, its strength to fire in the future increases.

6. Are all neurons in the brain of the same type?
a) yes
b) no

Answer: b
Explanation: This follows from the fact that no two cells in the human body are exactly alike, even if they belong to the same class.

7. What is the estimated number of neurons in the human cortex?
a) 10^8
b) 10^5
c) 10^11
d) 10^20

Answer: c
Explanation: The human cortex is estimated to contain about 10^11 neurons.

8. What is the estimated density of neurons per mm^2 of cortex?
a) 15 × 10^2
b) 15 × 10^4
c) 15 × 10^3
d) 5 × 10^4

Answer: b
Explanation: The estimated density is about 15 × 10^4 neurons per mm^2 of cortex.

9. Why can't we design a perfect neural network?
a) the full operation of biological neurons is still not known
b) the number of neurons is itself not precisely known
c) the number of interconnections is very large & very complex
d) all of the mentioned

Answer: d
Explanation: These are all fundamental reasons why we cannot design a perfect neural network.

10. How many synaptic connections are there in the human brain?
a) 10^10
b) 10^15
c) 10^20
d) 10^5

Answer: b
Explanation: You can estimate this value from the number of neurons in the human cortex & their density.


SET-4(History)


1. What kind of operations can neural networks perform?
a) serial
b) parallel
c) serial or parallel
d) none of the mentioned

Answer: c
Explanation: General characteristics of neural networks.

2. Is the argument that information in the brain is adaptable, whereas in a computer it is replaceable, valid?
a) yes
b) no

Answer: a
Explanation: It is a fact & relates to basic knowledge of neural networks: information in the brain is adaptable, whereas in a computer it is replaceable.

3. Does there exist a central control for processing information in the brain, as in a computer?
a) yes
b) no

Answer: b
Explanation: In the human brain, information is processed & analysed locally.

4. Which action is faster: pattern classification or adjustment of weights in neural nets?
a) pattern classification
b) adjustment of weights
c) equal
d) either of them can be fast, depending on conditions

Answer: a
Explanation: Memory is addressable, so patterns can be classified easily.

5. What is the feature of ANNs due to which they can deal with noisy, fuzzy, inconsistent data?
a) associative nature of networks
b) distributive nature of networks
c) both associative & distributive
d) none of the mentioned

Answer: c
Explanation: General characteristics of ANNs.

6. What was the name of the first model which could perform a weighted sum of inputs?
a) McCulloch-pitts neuron model
b) Marvin Minsky neuron model
c) Hopfield model of neuron
d) none of the mentioned

Answer: a
Explanation: The McCulloch-Pitts neuron model can perform a weighted sum of inputs followed by a threshold logic operation.

7. Who developed the first learning machine in which connection strengths could be adapted automatically?
a) McCulloch-pitts
b) Marvin Minsky
c) Hopfield
d) none of the mentioned

Answer: b
Explanation: In 1954 Marvin Minsky developed the first learning machine in which connection strengths could be adapted automatically & efficiently.

8. Who proposed the first perceptron model in 1958?
a) McCulloch-pitts
b) Marvin Minsky
c) Hopfield
d) Rosenblatt

Answer: d
Explanation: Rosenblatt proposed the first perceptron model in 1958.

9. John Hopfield was credited for what important aspect of neural networks?
a) learning algorithms
b) adaptive signal processing
c) energy analysis
d) none of the mentioned

Answer: c
Explanation: Energy analysis was the major contribution of his work in 1982.

10. What is the contribution of Ackley & Hinton to neural networks?
a) perceptron
b) boltzman machine
c) learning algorithms
d) none of the mentioned

Answer: b
Explanation: Ackley & Hinton built the Boltzmann machine.


SET-5(Terminology)

1. What is ART in neural networks?
a) automatic resonance theory
b) artificial resonance theory
c) adaptive resonance theory
d) none of the mentioned

Answer: c
Explanation: ART stands for adaptive resonance theory.

2. What is an activation value?
a) weighted sum of inputs
b) threshold value
c) main input to neuron
d) none of the mentioned

Answer: a
Explanation: The activation value is defined as the weighted sum of a unit's inputs.
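
A minimal sketch of the idea, assuming arbitrary illustrative weights and inputs: the activation value is simply the weighted sum of the inputs.

# Hypothetical weights and inputs, chosen only for illustration.
weights = [0.2, -0.5, 0.8]
inputs = [1.0, 0.5, 1.0]

# Activation value = weighted sum of the inputs.
activation = sum(w * a for w, a in zip(weights, inputs))
print(activation)  # 0.2*1.0 - 0.5*0.5 + 0.8*1.0 = 0.75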

3. Positive sign of weight indicates?
a) excitatory input
b) inhibitory input
c) can be either excitatory or inhibitory as such
d) none of the mentioned

Answer: a
Explanation: Sign convention of neuron.

4. Negative sign of weight indicates?
a) excitatory input
b) inhibitory input
c) excitatory output
d) inhibitory output

Answer: b
Explanation: Sign convention of neuron.

5. The amount of output of one unit received by another unit depends on what?
a) output unit
b) input unit
c) activation value
d) weight

Answer: d
Explanation: The activation is the weighted sum of the inputs, which determines the output; hence the amount of output received by another unit depends on the weights.

6. The process of adjusting the weight is known as?
a) activation
b) synchronisation
c) learning
d) none of the mentioned

Answer: c
Explanation: This is the basic definition of learning in neural nets.

7. The procedure to incrementally update each of the weights in a neural network is referred to as?
a) synchronisation
b) learning law
c) learning algorithm
d) both learning algorithm & law

Answer: d
Explanation: This is the basic definition of a learning law & learning algorithm in neural networks.

8. In what ways can output be determined from activation value?
a) deterministically
b) stochastically
c) both deterministically & stochastically
d) none of the mentioned

Answer: c
Explanation: This is the most important trait of input processing & output determination in neural networks.

9. How can output be updated in neural network?
a) synchronously
b) asynchronously
c) both synchronously & asynchronously
d) none of the mentioned

Answer: c
Explanation: Outputs can be updated at the same time (synchronously) or at different times (asynchronously) in the network.

10. What is an asynchronous update in neural networks?
a) output units are updated sequentially
b) output units are updated in parallel fashion
c) can be either sequentially or in parallel fashion
d) none of the mentioned

Answer: a
Explanation: In an asynchronous update, output units are updated sequentially, at different times.


SET-6(Model 1)



1. What is the name of the model in the figure below?

[Figure: neural-networks-questions-answers-models-1-q1]
a) Rosenblatt perceptron model
b) McCulloch-pitts model
c) Widrow’s Adaline model
d) None of the mentioned

Answer: b
Explanation: It is the general block diagram of the McCulloch-Pitts model of a neuron.

2. What is the nature of the function F(x) in the figure?
a) linear
b) non-linear
c) can be either linear or non-linear
d) none of the mentioned

Answer: b
Explanation: In this function the independent variable appears as an exponent in the equation, hence it is non-linear.

3. What does the character ‘b’ represent in the above diagram?
a) bias
b) any constant value
c) a variable value
d) none of the mentioned

Answer: a
Explanation: Bias is the more appropriate choice, since the bias is a fixed constant value in any circuit model.

4. If ‘b’ in the figure below is the bias, then what logic circuit does it represent?

[Figure: neural-networks-questions-answers-models-1-q4]
a) or gate
b) and gate
c) nor gate
d) nand gate

Answer: c
Explanation: Form the truth table of the above figure by taking the inputs as 0 or 1; it matches a NOR gate.
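
Since the original figure is not reproduced here, the following is a minimal sketch (in Python) of a McCulloch-Pitts style unit with assumed inhibitory weights of -1 and a threshold of 0; under those assumptions it reproduces the NOR truth table referred to in questions 4-6 and 9.

def mp_neuron(inputs, weights, threshold):
    # Weighted sum of the inputs followed by a hard threshold operation.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Assumed weights (-1, -1) and threshold 0 make the unit behave as a NOR gate.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), weights=(-1, -1), threshold=0))
# 0 0 -> 1, 0 1 -> 0, 1 0 -> 0, 1 1 -> 0 (NOR truth table)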

5. When both inputs are 1, what will be the output of the above figure?
a) 0
b) 1
c) either 0 or 1
d) z

Answer: a
Explanation: Check the truth table of a NOR gate.

6. When both inputs are different, what will be the output of the above figure?
a) 0
b) 1
c) either 0 or 1
d) z

Answer: a
Explanation: Check the truth table of a NOR gate.

7. Which of the following models has the ability to learn?
a) pitts model
b) rosenblatt perceptron model
c) both rosenblatt and pitts model
d) neither rosenblatt nor pitts

Answer: b
Explanation: Weights are fixed in the McCulloch-Pitts model but adjustable in the Rosenblatt perceptron model.

8. When both inputs are 1, what will be the output of the McCulloch-Pitts model NAND gate?
a) 0
b) 1
c) either 0 or 1
d) z

Answer: a
Explanation: Simply check the truth table of a NAND gate.

9. When both inputs are different, what will be the logical output of the figure of question 4?
a) 0
b) 1
c) either 0 or 1
d) z

Answer: a
Explanation: Check the truth table of a NOR gate.

10. Does the McCulloch-Pitts model have the ability to learn?
a) yes
b) no

Answer: b
Explanation: The weights in the McCulloch-Pitts model are fixed, so it cannot learn.



SET-7(Model 2)


1. Who invented perceptron neural networks?
a) McCulloch-Pitts
b) Widrow
c) Minsky & Papert
d) Rosenblatt

Answer: d
Explanation: The perceptron is one of the earliest neural networks. Invented at the Cornell Aeronautical Laboratory in 1957 by Frank Rosenblatt, the Perceptron was an attempt to understand human memory, learning, and cognitive processes.

2. What was the 2nd stage in the perceptron model called?
a) sensory units
b) summing unit
c) association unit
d) output unit

Answer: c
Explanation: This was the speciality of the perceptron model: it performs association mapping on the outputs of the sensory units.

3. What was the main deviation of the perceptron model from the MP model?
a) more inputs can be incorporated
b) learning enabled
c) all of the mentioned
d) none of the mentioned

Answer: b
Explanation: The weights in the perceptron model are adjustable, i.e. learning is enabled.

4. What is delta (error) in the perceptron model of a neuron?
a) error due to environmental condition
b) difference between desired & target output
c) can be both due to difference in target output or environmental condition
d) none of the mentioned

Answer: b
Explanation: All other factors are assumed to be null while calculating the error in the perceptron model; only the difference between the desired & target output is taken into account.

5. If a(i) is the input, δ is the error, and η is the learning parameter, how can the weight change in a perceptron model be represented?
a) η a(i)
b) η δ
c) δ a(i)
d) none of the mentioned

Answer: d
Explanation: The correct answer is η δ a(i).
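
A minimal sketch of the perceptron update ∆w = η δ a(i), assuming a 0/1 step output and the OR function as illustrative training data (the learning rate and epoch count are arbitrary assumptions).

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, eta=0.1, epochs=20):
    w = [0.0, 0.0, 0.0]                       # two input weights plus a bias weight
    for _ in range(epochs):
        for a, target in samples:
            a = list(a) + [1.0]               # constant input of 1 for the bias weight
            s = step(sum(wi * ai for wi, ai in zip(w, a)))
            delta = target - s                # error = desired output - actual output
            w = [wi + eta * delta * ai for wi, ai in zip(w, a)]   # dw = eta * delta * a(i)
    return w

or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(or_samples))           # weights that realise the OR function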

6. What is adaline in neural networks?
a) adaptive linear element
b) automatic linear element
c) adaptive line element
d) none of the mentioned

Answer: a
Explanation: Adaline stands for adaptive linear element.

7. Who invented the adaline neural model?
a) Rosenblatt
b) Hopfield
c) Werbos
d) Widrow

Answer: d
Explanation: Widrow invented the adaline neural model.

8. What was the main point of difference between the adaline & perceptron models?
a) weights are compared with output
b) sensory units result is compared with output
c) analog activation value is compared with output
d) all of the mentioned

Answer: c
Explanation: In adaline the analog activation value is compared with the target output, instead of the thresholded output as in the perceptron model; this was the main point of difference between the two models.

9. In the adaline model, what is the relation between the output & the activation value (x)?
a) linear
b) nonlinear
c) can be either linear or non-linear
d) none of the mentioned

Answer: a
Explanation: The output s = f(x) = x, hence the relation is linear.

10. What is another name for the weight update rule in the adaline model, based on its functionality?
a) LMS error learning law
b) gradient descent algorithm
c) both LMS error & gradient descent learning law
d) none of the mentioned

Answer: c
Explanation: The weight update rule minimizes the mean squared error (delta squared), averaged over all inputs, & this law is derived using the negative gradient of the error surface in weight space; hence both LMS error learning & gradient descent apply.
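
A minimal sketch of the adaline (LMS / Widrow-Hoff) update, assuming a single weight and the illustrative linear target y = 2x; the error is taken against the analog activation, and the update follows the negative gradient of the squared error.

def train_adaline(samples, eta=0.05, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = w * x                 # linear output: f(activation) = activation
            error = target - y        # error against the analog value, not a thresholded output
            w += eta * error * x      # step along the negative gradient of the squared error
    return w

samples = [(x, 2.0 * x) for x in (-2.0, -1.0, 1.0, 2.0)]
print(train_adaline(samples))         # converges towards 2.0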




SET-8(Topology)



1. In a neural network, how can connections between different layers be achieved?
a) interlayer
b) intralayer
c) both interlayer and intralayer
d) either interlayer or intralayer

Answer: c
Explanation: Connections can be made from one unit to another across layers (interlayer) and among the units within a layer (intralayer).

2. How can connections across the layers & among the units within a layer be organised in standard topologies?
a) in feedforward manner
b) in feedback manner
c) both feedforward & feedback
d) either feedforward or feedback

Answer: d
Explanation: Connections across the layers in standard topologies can be organised in a feedforward manner or in a feedback manner, but not both.

3. What is an instar topology?
a) when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent
b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (the input vector)
c) can be either way
d) none of the mentioned

Answer: a
Explanation: Restatement of basic definition of instar.

4. What is an outstar topology?
a) when input is given to layer F1, the jth (say) unit of the other layer F2 will be activated to the maximum extent
b) when the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (the input vector)
c) can be either way
d) none of the mentioned

Answer: b
Explanation: Restatement of basic definition of outstar.

5. The operation of instar can be viewed as?
a) content addressing the memory
b) memory addressing the content
c) either content addressing or memory addressing
d) both content & memory addressing

Answer: a
Explanation: Because in an instar, when input is given to layer F1, the jth (say) unit of the other layer F2 is activated to the maximum extent, the operation is like content addressing the memory.

6. The operation of outstar can be viewed as?
a) content addressing the memory
b) memory addressing the content
c) either content addressing or memory addressing
d) both content & memory addressing

Answer: b
Explanation: Because in an outstar the weight vector for connections from the jth unit (say) in F2 approaches the activity pattern in F1 (the input vector), the operation is like memory addressing the content.

7. If two layers coincide & weights are symmetric(wij=wji), then what is that structure called?
a) instar
b) outstar
c) autoassociative memory
d) heteroassociative memory

Answer: c
Explanation: In autoassociative memory each unit is connected to every other unit & to itself.
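
A minimal sketch of such an autoassociative memory, assuming two illustrative bipolar patterns: the symmetric weight matrix (wij = wji) is built by summing outer products, and a noisy probe is mapped back to the stored pattern.

import numpy as np

p1 = np.array([1, -1, 1, -1, 1, -1])          # illustrative stored patterns
p2 = np.array([1, 1, -1, -1, 1, 1])

W = np.outer(p1, p1) + np.outer(p2, p2)       # symmetric; each unit also connects to itself
assert (W == W.T).all()

noisy = np.array([1, -1, 1, -1, 1, 1])        # p1 with its last element flipped
recalled = np.where(W @ noisy >= 0, 1, -1)    # one synchronous update of all units
print((recalled == p1).all())                 # True: the stored pattern is recovered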

8. Heteroassociative memory can be an example of which type of network?
a) group of instars
b) group of outstars
c) either group of instars or outstars
d) both group of instars or outstars

Answer: c
Explanation: Depending upon the direction of flow, the memory can be viewed as either type.

9. What is STM in neural network?
a) short topology memory
b) stimulated topology memory
c) short term memory
d) none of the mentioned

Answer: c
Explanation: STM stands for short term memory.

10. What does STM correspond to?
a) activation state of network
b) encoded pattern information pattern in synaptic weights
c) either way
d) both way

Answer: a
Explanation: Short-term memory (STM) refers to the capacity-limited retention of information over a brief period of time, hence it corresponds to the activation state of the network.

11. What does LTM correspond to?
a) activation state of network
b) encoded pattern information pattern in synaptic weights
c) either way
d) both way

Answer: b
Explanation: Long-term memory (LTM) is the encoding & retention of an effectively unlimited amount of information for a much longer period of time; hence it corresponds to the pattern information encoded in the synaptic weights.



SET-9(Learning 1)






1. On what parameters can the change in the weight vector depend?
a) learning parameters
b) input vector
c) learning signal
d) all of the mentioned

Answer: d
Explanation: Change in weight vector corresponding to jth input at time (t+1) depends on all of these parameters.
2. If the change in weight vector is represented by ∆wij, what does it mean?
a) it describes the change in the weight vector for the ith processing unit, taking the jth input into account
b) it describes the change in the weight vector for the jth processing unit, taking the ith input into account
c) it describes the change in the weight vector for both the jth & ith processing units
d) none of the mentioned

Answer: a
Explanation: ∆wij= µf(wi a)aj, where a is the input vector.
3. What is the learning signal in the equation ∆wij= µf(wi a)aj?
a) µ
b) wi a
c) aj
d) f(wi a)

Answer: d
Explanation: This is the output of the unit, a nonlinear function of the activation, and it acts as the learning signal.
4. Is Hebb's law a supervised or an unsupervised type of learning?
a) supervised
b) unsupervised
c) either supervised or unsupervised
d) can be both supervised & unsupervised

Answer: b
Explanation: No desired output is required for its implementation.
5. Hebb's law can be represented by which equation?
a) ∆wij= µf(wi a)aj
b) ∆wij= µ(si) aj, where (si) is output signal of ith input
c) both way
d) none of the mentioned

Answer: c
Explanation: In Hebb's law, si = f(wi a), so both forms are equivalent.
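
A minimal sketch of Hebb's law ∆wij = µ si aj with si = f(wi a), assuming an illustrative signum-like output function, a small initial weight vector and an arbitrary input; note that no desired output appears anywhere in the update.

import numpy as np

def f(x):
    return 1.0 if x >= 0 else -1.0        # assumed output function

mu = 0.1                                  # learning rate
w = np.array([0.05, 0.02, -0.01])         # small initial weights
a = np.array([1.0, -1.0, 1.0])            # input vector

for _ in range(3):                        # repeated presentation strengthens the weights
    s = f(w @ a)                          # learning signal s = f(w . a)
    w = w + mu * s * a                    # dw = mu * s * a, no target output needed
print(w)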
6. Which of the following statements hold for the perceptron learning law?
a) it is supervised type of learning law
b) it requires desired output for each input
c) ∆wij= µ(bi – si) aj
d) all of the mentioned

Answer: d
Explanation: All statements follow from ∆wij= µ(bi – si) aj, where bi is the target output; hence it is supervised learning.
7. Is delta learning of the unsupervised type?
a) yes
b) no

Answer: b
Explanation: Change in weight is based on the error between the desired & the actual output values for a given input.
8. The Widrow & Hoff learning law is a special case of?
a) hebb learning law
b) perceptron learning law
c) delta learning law
d) none of the mentioned

Answer: c
Explanation: In this law the output function is assumed to be linear; everything else is the same as in the delta learning law.
9. What is the other name of the Widrow & Hoff learning law?
a) Hebb
b) LMS
c) MMS
d) None of the mentioned

Answer: b
Explanation: LMS stands for least mean squares: the change in weight is made proportional to the negative gradient of the error, which is possible due to the linearity of the output function.
10. Which of the following equations represents the perceptron learning law?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) aj f'(xi), where f'(xi) is the derivative of the output function at xi
d) ∆wij= µ(bi – (wi a)) aj

Answer: b
Explanation: The perceptron learning law is a supervised, nonlinear type of learning.



SET-10 (Learning 2)

1. The correlation learning law is a special case of?
a) Hebb learning law
b) Perceptron learning law
c) Delta learning law
d) LMS learning law

Answer: a
Explanation: The output si in Hebb's law is replaced by the target output bi in the correlation law.

2. The correlation learning law is what type of learning?
a) supervised
b) unsupervised
c) either supervised or unsupervised
d) both supervised or unsupervised

Answer: a
Explanation: It is supervised, since it depends on the target output.

3. The correlation learning law can be represented by which equation?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) aj f'(xi), where f'(xi) is the derivative of the output function at xi
d) ∆wij= µ bi aj

Answer: d
Explanation: The correlation learning law depends on the target output bi.

4. What is the other name for the instar learning law?
a) loser take it all
b) winner take it all
c) winner give it all
d) loser give it all

Answer: b
Explanation: The weights are adjusted only for the unit that gives the maximum output, hence "winner take it all".

5. The instar learning law can be represented by which equation?
a) ∆wij= µ(si) aj
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) aj f'(xi), where f'(xi) is the derivative of the output function at xi
d) ∆wk= µ (a-wk), unit k with maximum output is identified

Answer: d
Explanation: Follows from basic definition of instar learning law.
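
A minimal sketch of the instar update ∆wk = µ (a − wk), assuming an illustrative two-unit layer: only the unit with the maximum output is updated, and its weight vector moves toward the input.

import numpy as np

mu = 0.5
W = np.array([[0.9, 0.1],        # weight vector of unit 0
              [0.2, 0.8]])       # weight vector of unit 1
a = np.array([0.0, 1.0])         # input vector

k = int(np.argmax(W @ a))        # unit with the maximum output wins
W[k] = W[k] + mu * (a - W[k])    # dw_k = mu * (a - w_k), applied to the winner only
print(k, W[k])                   # unit 1 wins; its weights move toward a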

6. Is instar a case of supervised learning?
a) yes
b) no

Answer: b
Explanation: Since the weight adjustment does not depend on a target output, it is unsupervised learning.

7. The outstar learning law can be represented by which equation?
a) ∆wjk= µ(bj – wjk), where the kth unit is the only active unit in the input layer
b) ∆wij= µ(bi – si) aj
c) ∆wij= µ(bi – si) aj f'(xi), where f'(xi) is the derivative of the output function at xi
d) ∆wij= µ(si) aj

Answer: a
Explanation: Follows from basic definition of outstar learning law.
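
A minimal sketch of the outstar update ∆wjk = µ (bj − wjk), assuming an illustrative target pattern b and a single active input unit k: the fan-out weights from unit k approach the desired activity pattern, which requires a target output (supervised).

import numpy as np

mu = 0.5
W = np.zeros((3, 2))               # W[j, k]: weight from input unit k to output unit j
b = np.array([1.0, -1.0, 1.0])     # desired activity pattern on the output layer
k = 0                              # the only active unit in the input layer

for _ in range(4):
    W[:, k] = W[:, k] + mu * (b - W[:, k])   # dw_jk = mu * (b_j - w_jk)
print(W[:, k])                     # approaches the target pattern b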

8. Is outstar a case of supervised learning?
a) yes
b) no

Answer: a
Explanation: Since the weight adjustment depends on the target output, it is supervised learning.

9. Which of the following learning laws belong to the same category of learning?
a) hebbian, perceptron
b) perceptron, delta
c) hebbian, widrow-hoff
d) instar, outstar

Answer: b
Explanation: They both belong to the supervised type of learning.

10. In Hebbian learning, how are the initial weights set?
a) random
b) near to zero
c) near to target value
d) none of the mentioned

Answer: b
Explanation: Hebb's law leads to a sum of correlations between input & output; in order to achieve this, the starting initial weight values must be small (near zero).
