Legacy Forum: Preserving Nearly 20 Years of Community History - A Time Capsule of Discussions, Memories, and Shared Experiences.

AI

Hitec robotics including ROBONOVA humanoid, HSR-8498HB servos, MR C-3024 Controllers and RoboBasic
64 posts • Page 1 of 5 • 1, 2, 3, 4, 5


Post by Humanoido » Fri Apr 27, 2007 7:34 am


I'm working on another program that can write new sections of code automatically for AI. Has anyone tried this? I wrote a program that was intelligent enough to automatically generate CNC programs based on a generalized directive, and so thought I'd try this out on my robot. The trick is getting it to run in real time and re-execute sections of code.

Does anyone have any ideas for running AI on RN? What is your approach and what do you wish to accomplish?

humanoido
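The directive-to-code idea reads like a template expander: a high-level directive plus a few parameters becomes a fresh block of program text that can be regenerated and re-sent whenever the goal changes. A minimal sketch, with directive names, templates, and subroutine labels invented purely for illustration (none of them come from a real RoboBasic program):

```python
# Hypothetical sketch: the directives, templates and labels below are
# made up for illustration, not taken from any real RoboBasic program.

# Each high-level directive maps to a template of low-level statements.
TEMPLATES = {
    "wave": [
        "GOSUB raise_arm",
        "FOR i = 1 TO {reps}",
        "GOSUB wag",
        "NEXT i",
        "GOSUB lower_arm",
    ],
    "step_forward": ["GOSUB shift_weight", "GOSUB swing_leg", "GOSUB plant_foot"],
}

def generate(directive: str, **params) -> str:
    """Expand a directive into a newline-joined section of program text."""
    return "\n".join(line.format(**params) for line in TEMPLATES[directive])

print(generate("wave", reps=3))
```

The same expander can be re-run in a loop on the PC, which is where the "re-execute sections of code" part would live rather than on the controller itself.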
Humanoido
Savvy Roboteer
Posts: 574
Joined: Tue Dec 05, 2006 1:00 am
Location: Deep in the Heart of Asia

Post by DirtyRoboto » Fri Apr 27, 2007 5:36 pm


I like to use virus models. I am looking at a few virus codes that can replicate with modifications in the offspring. I am thinking along the lines of a good virus code placed inside a real-world body.
The virus itself can carry all sorts of payload, so please don't think of them as bad things; just bad publicity and computer crashes have given them a bad name.

I am looking into getting my RN Bluetooth-equipped for this reason. I want to set up a virus over Bluetooth: one cell of the virus in my PC, and the twin cell in the RN1. The cell in the RN1 is specifically tailored to gather sensor data and to execute motion control. The cell in the PC can send RN1 sensor data to an external math module to compute and return the result. It can also send motion-control data based on calculations made by a powerful PC.
The two cells would live a real-time lifespan, and then the virus DNA would be passed to a next-generation cell group.

Marcus.
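For the PC cell and the RN1 cell to trade sensor data and servo commands over a serial/Bluetooth link, the messages need some framing so a dropped byte is detected rather than executed as a bogus command. The frame layout, type codes, and checksum below are my own illustration, not an actual Robonova or RoboBasic protocol:

```python
import struct

# Hypothetical frame: [type byte][length byte][payload...][checksum byte].
SENSOR = 0x01   # RN1 -> PC: sensor readings
POSE   = 0x02   # PC -> RN1: target servo positions

def pack_frame(kind: int, payload: bytes) -> bytes:
    """Prefix payload with type and length, append a simple sum checksum."""
    body = struct.pack("BB", kind, len(payload)) + payload
    return body + bytes([sum(body) & 0xFF])

def unpack_frame(frame: bytes):
    """Verify the checksum and return (type, payload)."""
    *body, checksum = frame
    if sum(body) & 0xFF != checksum:
        raise ValueError("corrupt frame")
    kind, length = frame[0], frame[1]
    return kind, frame[2:2 + length]

frame = pack_frame(POSE, bytes([100, 110, 120]))   # three servo positions
kind, payload = unpack_frame(frame)
```

On real hardware the frames would go through whatever serial library is available (e.g. pyserial on the PC side); the pack/unpack logic itself needs nothing beyond the standard library.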
In servo's we trust!
DirtyRoboto
Savvy Roboteer
Posts: 412
Joined: Tue Sep 19, 2006 1:00 am
Location: London

Post by chully00 » Fri Apr 27, 2007 7:05 pm


It seems like the limitations of the MR-3024 and roboBASIC would be the main trouble spots. From the way I understand your goal, I don't think RoboBASIC would give you the power you need to reprogram the robonova on the fly.
If you insist upon using the RN and stuff, I suggest that you:
  • Write a computer program to generate RoboBASIC code
  • Use two serial ports on the computer: one for reprogramming the RN just like roboBASIC does, and one for communicating useful data back to the computer (to be used in generating the next gen program, for instance)
  • Write a template program in RoboBASIC to make your RN run around and do whatever artificial-life-type-stuff you want, with plenty of subroutines that your computer program can tweak and reprogram the RN with.

Dunno if that helps :?

The trick is getting it to run in real time and re-execute sections of code.

Getting code to run real-time could be hard in RoboBASIC, again. Any complicated calculation would have to be moved to a non-time-critical place, and that's about all there is to it, IMO. Real creatures don't know differential equations though, so just keep the result of complicated calculations on the robonova, and the formula on the computer for tweaking. Re-executing sections of code is easy though -- try GOTO :wink:
I'd say to do any AI on the robonova, you need to rip off that MR-3024 and put it in the parts bin. Get a gumstix or, really, anything that can do higher level programming. sorry. :(
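The template idea in the last bullet could look something like this on the PC side: keep a skeleton program with a placeholder subroutine body, and regenerate the full program text for each re-download. The template text, `{behave_body}` marker, and the sample MOVE line are hypothetical illustrations:

```python
# Hypothetical skeleton: label names and the placeholder marker are my
# own illustration of "a template program with tweakable subroutines".
TEMPLATE = """\
main:
  GOSUB behave
  GOTO main
behave:
{behave_body}
  RETURN
"""

def render(behave_body_lines):
    """Fill the placeholder subroutine with this generation's statements."""
    body = "\n".join("  " + line for line in behave_body_lines)
    return TEMPLATE.format(behave_body=body)

# Generation N of the behaviour, produced by the PC-side program; it would
# then go down the first serial port, while the second port carries sensor
# data back to help generate generation N+1.
program = render(["MOVE G6A, 100, 76, 145, 93, 100", "WAIT 500"])
print(program)
```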
chully00
Robot Builder
Posts: 7
Joined: Tue Apr 03, 2007 6:39 am

Post by DirtyRoboto » Fri Apr 27, 2007 8:08 pm


My friend chully00: the MR-3024, as you call it, is to be understood by its application. It is the CNS for the servo platform we call RN1. It has no brain. It is just the spinal function for low-level control of a high-level system.

In one hour I can make an RB program to run an autonomous goal-check routine that can address an external array for storage of data, to be processed by external math modules tailored to the RN1. The results are converted into poses and sent to the RN1.

Marcus
In servo's we trust!
DirtyRoboto
Savvy Roboteer
Posts: 412
Joined: Tue Sep 19, 2006 1:00 am
Location: London

Post by chully00 » Fri Apr 27, 2007 9:00 pm


The results are converted into poses and sent to the RN1.


I want to do this too! Then, as you say, the 3024 can become the spinal function for the low-level control of the RN1's servos. I don't know how the poses would be sent to the RN1, though. I have written a program to go to different subroutines (poses) based on keyboard input through the ETX and ERX ports. Are you saying there is a way of sending coordinates to the RN1 so that it can move its limbs in an arbitrary fashion?

At best, I feel like this would be hard to do on the 3024 in RoboBasic. I assume you would send a control code describing where to move which servos. This makes the program control easy at some level: RN1 gets a control code from the computer, then RN1 gets sensor input, then RN1 sends sensor readings as an acknowledgement to the computer, and finally RN1 moves its servos to the pose described by the control code. If this is silly, let me know, because it is the way I assume it would be done.

What really gets me is that I want to move the Robonova to ANY position. (My AI dream for the RN1 is for it to explore its range of static stability/balance to create its own poses.) I think that to move to any pose you would have to implement a vast decision tree. If the control codes I mentioned earlier were used, then you would have to have an ON ... GOTO ... statement, and subroutines something like this:
Code: Select all
servo0-000:
  move g24, 0, , , , , , , , , , , , , , , , , , , , , , ,
  return
servo0-005:
  move g24, 5, , , , , , , , , , , , , , , , , , , , , , ,
  return
servo0-010:
  move g24, 10, , , , , , , , , , , , , , , , , , , , , , ,
  return
...
...
servo0-255:
  move g24, 255, , , , , , , , , , , , , , , , , , , , , , ,
  return

Code: Select all
servo1-000:
  move g24, , 0, , , , , , , , , , , , , , , , , , , , , ,
  return
servo1-005:
  move g24, , 5, , , , , , , , , , , , , , , , , , , , , ,
  return
...
...
servo1-255:
  move g24, , 255 , , , , , , , , , , , , , , , , , , , , , ,
  return


It just doesn't seem like a good way to do it to me. I wonder what I'm not seeing. :?
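One way around the vast decision tree: rather than one subroutine per (servo, position) pair, thousands of labels for 24 servos × 256 positions, let a single generic routine build the move line from a two-byte control code (servo index, target position). A hypothetical PC-side sketch, following the empty-slot syntax of the listings above:

```python
NUM_SERVOS = 24  # slot count matching the "move g24" lines above

def move_line(servo: int, position: int) -> str:
    """Build one 'move g24' line with only the addressed servo filled in,
    leaving the other slots empty exactly as in the listings above."""
    slots = [""] * NUM_SERVOS
    slots[servo] = str(position)
    return "move g24, " + ", ".join(slots)

# The servo0-005 and servo1-255 cases, generated instead of hand-written:
print(move_line(0, 5))
print(move_line(1, 255))
```

This collapses the whole decision tree into one function; the ON ... GOTO ... dispatch would only be needed if the line really had to be generated on the controller itself rather than on the PC.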
chully00
Robot Builder
Posts: 7
Joined: Tue Apr 03, 2007 6:39 am

Post by DirtyRoboto » Fri Apr 27, 2007 9:31 pm


The RoboBasic interface offers spinal programming. The controller offers spinal programming.
Spinal programming is the set of poses or moves executed within the RN1. You could call them "the autonomous functions", executed in response to the higher-functioning computations governing the given system.

The higher-functioning computations should be performed offboard, against a simulation, and the results sent back as servo positions. The RN1 is an input/output device, an extension of the computer into our life. The question is: how can I provide the humanoid with enough sense to succeed?

Marcus.
In servo's we trust!
DirtyRoboto
Savvy Roboteer
Posts: 412
Joined: Tue Sep 19, 2006 1:00 am
Location: London

Post by chully00 » Fri Apr 27, 2007 10:29 pm


DirtyRoboto wrote:The question is. How can I provide the humanoid with enough sense to succeed


True, that. Good AI is needed to make the RN1 really great, in my opinion.
chully00
Robot Builder
Posts: 7
Joined: Tue Apr 03, 2007 6:39 am

Post by Modereso » Sat Apr 28, 2007 12:57 am


DirtyRoboto wrote:The RoboBasic interface offers spinal programming. The controller offers spinal programming.
Spinal programming is the set of poses or moves executed within RN1. You could say "The autonomous functions", executed in response to higher functioning computations governing the given system.

The higher functioning computations should be performed offboard, against an simulation and the results are sent back as servo positions. The RN1 is an output/input device, an extension of the computer into our life. The question is. How can I provide the humanoid with enough sense to succeed

Marcus.


A good, well-mapped, logical possibility there. I planned a similar approach by brainstorming a scheme which uses the same kind of mechanism: relaying information to and from external hardware which, in turn, manages the allocation of both lower- and higher-level code execution. This 'ANN collaboration backbone' will decide and act upon data in real time.

If you could spontaneously abstract sensory data from the bot, and have it act upon that data to call upon servo motions/sequences which, in theory, would be sent back to the bot, then it's possible you could set up some kind of autonomous real-time AI behaviour. Our ideas may differ a little, but the potential to create a basic network of intelligence is certainly there. I'm sure there is a way to kit the RN out with some decent sensors, and to transmit and manipulate the data from those sensors both remotely and locally, with a return mechanism included.

Another cool thing I will be looking into when I get my RN is AI within motion/recognition. I don't want to get into too much detail at the moment, but I have an unused small-form digital camera sat here just waiting to be hacked. Finally, another solution I thought of (to overcome a possible I/O module limitation) is simply to stay clear of the main controller board: instead, find a way to get data from the sensors/modules installed in the bot using some kind of separate wireless connectivity - but I'll have to look into that. Even so, you can still have the abstraction act upon that data once it reaches the backbone, which in turn fires up the servo/motion sequence. As for code, well, I agree that for the higher-level stuff... :wink:

But then, saying that, RoboBasic gets the job done when it comes to control/manipulation of the actual hardware.
Modereso
Savvy Roboteer
Posts: 37
Joined: Mon Mar 26, 2007 9:08 pm

Post by Robo1 » Sat Apr 28, 2007 12:13 pm


Some problems I can see with what you're talking about.

If you want the PC to be the brain and make all the decisions in real time, you're going to come a cropper due to latency. But the general ideas you have have been tried in research for years.

DirtyRoboto, if you want to know more about your idea on viruses: in the research community it is referred to as an evolutionary algorithm. You create a number of DNA strands (these could be variables, or raw 0s and 1s), then you find the best two or so strands and cross them with some mutation, taking bits from one and placing them in the other. There are a couple more techniques you could use. The only problem you'll have on the RN is working out the reward value, e.g. how do you rate the different strands, and how do you encode the strands? You could, for example, take the forward walking gait and mutate the values in the walking code, e.g.

you break the cycle into 10 steps and get the program to change and try different numbers for the servos over the 10 steps until you find the best set of values. But here you have the problem of quantifying how good the new gait is compared to the old one.

I'm currently writing a paper on it and will post a draft on this site in a couple of weeks.

But this is still an open field; no one has found the definitive answer. That's why there are lots of universities around the world working on the problem.

If you search for something like "biped gait generation" in Google Scholar you will find lots of papers written about this field.

I will post a good paper written about this using a Robo-One style robot.

Bren
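The evolutionary-algorithm loop described above, in a minimal form: each "DNA strand" is ten servo values for one walk cycle, the best two strands are crossed with some mutation, and the cycle repeats. The fitness function here is a stand-in (distance to a made-up "good" gait); on a real RN1, measuring it is, as noted, the hard part:

```python
import random

random.seed(0)
# A made-up "good" 10-step gait used as a stand-in fitness target; a real
# reward would come from scoring the robot's actual walk.
TARGET = [100, 110, 120, 130, 140, 140, 130, 120, 110, 100]

def fitness(strand):
    """Higher is better; 0 means the strand matches the target exactly."""
    return -sum(abs(a - b) for a, b in zip(strand, TARGET))

def crossover(a, b):
    """Take the head of one strand and the tail of the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(strand, rate=0.2):
    """Nudge each servo value with probability `rate`, clamped to 0-255."""
    return [min(255, max(0, v + random.randint(-10, 10)))
            if random.random() < rate else v for v in strand]

population = [[random.randint(0, 255) for _ in range(10)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best, second = population[0], population[1]
    # Elitism: keep the best strand, breed the rest from the top two.
    population = [best] + [mutate(crossover(best, second)) for _ in range(19)]
```

With a simulated fitness like this the population converges toward the target gait; swapping in a measured reward (distance walked, falls, etc.) is exactly the open quantification problem mentioned above.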
Robo1
Savvy Roboteer
Posts: 501
Joined: Fri Jun 30, 2006 1:00 am
Location: UK - Bristol

Post by Modereso » Sat Apr 28, 2007 2:57 pm


Yes, this won't be easy by any means. However, an ANN & communication relay can be developed, provided you have a way to relay the data efficiently. Below, I have documented a somewhat large, but hopefully useful, post if you like this kind of thing. If not, don't read it =]

This document will attempt to explain a little about operant conditioning. Personally, I think that operant conditioning is a solid way to model the basis of AI & ANN. What I find interesting in particular, is how operant conditioning evolves & works within the Human brain, and how we can develop sub networks of ability based on initial probability & judgment. Many people find the process of studying the Human brain tedious and boring, since the end goal is to develop something which can function alone. But actually, if we can examine the way our own brain works, we can then develop an understanding of how the operant process could be useful within AI.

No matter who you are, you will use operant conditioning, yet we never stop to think how amazing the process is. We condition ourselves to the point that it just becomes perfectly normal to act like this if x = 1 or y = 2. I would like to study more along the path of operant AI; I think it is a good fundamental aspect. A few subroutines of operant conditioning might utilize things like memory, sensory input, understanding, judgment, emotion & outcome. In humans, it is thought that the process runs within an organ called the amygdala. It is not entirely clear how large a part the amygdala plays in the process, but it looks useful in regulating or directing judgment, fear, memories & emotions. The amygdala is structured (for instance) such that it becomes a switch, communicating with biological neurons. We need to keep these neural networks up to date somehow, and operant conditioning is a way to do this. Otherwise, we would forget what we had developed and, gradually, the neural link would become 'detached'.

A good example to use is if you are walking down a street. If you see danger in the corner of that street, you would act upon the danger depending on how you perceived it. The next time you have to walk down the same street, you would recall the past event & possibly avoid the corner the danger was in. That is a sub form of operant ability (you are training your brain on how to act in any given situation or way).

Learning to drive a car is a classic example of operant conditioning; Over time, you would learn what to do and what not to do. How to drive, and how not to drive etc.

I hope to have outlined some basic uses for operant ability models within ANN and AI. Understanding the other subroutines of operant ability is required to understand how, exactly, we condition, memorize and act. So, now the question: why would operant conditioning be useful within AI? Firstly, it is important to understand that operant conditioning & ability is a highly powerful tool to utilize. It is how we inherit and carry evolution. It is structured, logical, and well maintained. This incredibly powerful process can offer the formation and pillars of AI. You would need to structure it in such a way that data goes in & comes out according to the operant and its subroutines. Number switching is a good way to call upon operant routines, where the ANN would hold and regulate the operant subroutines, and the formation would hold the actual process.
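A toy rendering of the operant idea (my own framing, not from the post): the agent keeps a value per action, acts mostly on its past conditioning, and reinforces whatever the environment rewards, so repetition strengthens the rewarded behaviour much like the street-corner example:

```python
import random

random.seed(1)
ACTIONS = ["avoid_corner", "walk_into_corner"]
values = {a: 0.0 for a in ACTIONS}   # the agent's "conditioning"

def reward(action):
    # The street-corner example: walking into the danger corner hurts.
    return 1.0 if action == "avoid_corner" else -1.0

for trial in range(200):
    if random.random() < 0.1:
        action = random.choice(ACTIONS)          # occasional exploration
    else:
        action = max(values, key=values.get)     # act on past conditioning
    # Reinforcement update: nudge the value toward the received reward.
    values[action] += 0.1 * (reward(action) - values[action])
```

After a couple of hundred trials the agent reliably prefers avoiding the corner; the value table is a crude stand-in for the neural link that stays "attached" through repetition.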
Modereso
Savvy Roboteer
Posts: 37
Joined: Mon Mar 26, 2007 9:08 pm

Post by Robo1 » Sat Apr 28, 2007 4:53 pm


hi

Thanks for that, Modereso. They've done some cool things in the lab modelling the way the brain works and then implementing them in robots. They're currently working with Sheffield University's biology department, modelling the brain of a rat.

http://www.ias.uwe.ac.uk/WhiskerBot/main.htm

This is also a good paper, with some good insight into the subject.

http://www4.cs.umanitoba.ca/~jacky/Robotics/Papers/wolff-walking-visual-feedback.pdf

Bren
Robo1
Savvy Roboteer
Posts: 501
Joined: Fri Jun 30, 2006 1:00 am
Location: UK - Bristol

Post by Modereso » Sat Apr 28, 2007 6:22 pm


Cool! Thanks for that, Bren! :)

I think developers & scientists are exploring ANNs more now, as they understand the potential they can encapsulate. One thing which could obstruct the process, though, is ethics. Future concerns about ethical issues may one day pose a limit on how far we can take evolution & AI. Perhaps this is our dominant fight for survival as a human race kicking in (and rightly so). But I would like to see 'the right to evolution' underlined in big bold letters. If it fails, we could rent out some island and have all the AI we want! (But now I'm just being silly and unrealistic.) :wink:

As you said, there are plenty of articles out there which make for fantastic reading and new knowledge. I'll be sticking to the operant side of things for a long while, it seems.
Modereso
Savvy Roboteer
Posts: 37
Joined: Mon Mar 26, 2007 9:08 pm

Post by Humanoido » Sun Apr 29, 2007 2:21 am


Indeed, this raises many issues: the right to life, government legislation, evolution, slavery, racism, and the rights of a new life form. Science fiction has covered many of these issues already.

http://www.en.wikipedia.org/wiki/Data_(Star_Trek)/

Some scientists believe that, possibly in as soon as 20 years, the results from SETI could "discover a new life form."

http://www.en.wikipedia.org/wiki/SETI

Not to be too presumptuous, but in my eyes mankind will create a new life form with the birth of intelligent, self-aware AI humanoid robots. Knowingly or not, we, with our Robonovas, are paving the way toward that result.

humanoido
Humanoido
Savvy Roboteer
User avatar
Posts: 574
Joined: Tue Dec 05, 2006 1:00 am
Location: Deep in the Heart of Asia

Post by Modereso » Sun Apr 29, 2007 1:02 pm


Hi, Humanoido.

I agree with your theory to the full extent. However:

Human beings are the worst when it comes to ignoring morals and principles that already exist within the human race.

Many are worried about the prospect of AI and robots taking over, but we have already accomplished this as human beings. Most of the things raised have already been happening within our own race for a long time. Years, even...

As long as this new evolution develops an understanding of what to do and what not to do (right and wrong), I really can't see a problem there, at present.
Modereso
Savvy Roboteer
Posts: 37
Joined: Mon Mar 26, 2007 9:08 pm

Post by Humanoido » Mon Apr 30, 2007 1:07 am


It would be my guess that the real danger is some human putting a gun in the hands of a robot and programming it to go on a rampage. There are threads of philosophy stating that intelligence is not inherently evil, only shaped by the byproducts of society, while other threads hold that certain instinctive behavioral traits govern humans. Well, we all know that HAL in 2001: A Space Odyssey became psychotic because of how he was treated by humans. It will be interesting to see what real traits develop from pure AI.

This leads to my next question about AI. What is the key to self-awareness? We can put lots of sensors on our RN and interpret the results this way and that, but in reality, how do we make our bots totally self-aware and capable of learning on their own? Are these two different issues, or the same one?
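One way to make "learning on their own" concrete is simple reinforcement learning: the bot discovers which action pays off purely from trial-and-error reward, with no hand-written rules. Here is a minimal Q-learning-style sketch in Python (not RoboBasic, and nothing here is actual Robonova or MR-C3024 API); the states, actions, and reward values are all invented for illustration. A hypothetical robot learns by itself that it should turn when its sensor reports an obstacle:

```python
import random

# Hypothetical sensor states and motion commands -- invented for illustration.
STATES = ["clear", "obstacle"]
ACTIONS = ["forward", "turn"]

def reward(state, action):
    # Walking forward into an obstacle is penalized; anything else is rewarded.
    if state == "obstacle" and action == "forward":
        return -1.0
    return 1.0

def train(episodes=2000, alpha=0.5, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected reward for each (state, action) pair, starting at zero.
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = rng.choice(STATES)
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        # One-step update toward the observed reward (toy, bandit-style setup).
        q[(state, action)] += alpha * (reward(state, action) - q[(state, action)])
    return q

q = train()
# The learned policy prefers turning when an obstacle is sensed.
print(max(ACTIONS, key=lambda a: q[("obstacle", a)]))  # prints "turn"
```

In a real setup the reward would come from live sensor feedback (e.g. over the serial or Bluetooth link people discuss here), and the state space would be far larger; the point is only that the learning loop itself needs no pre-programmed behavior.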

humanoido
Humanoido
Savvy Roboteer
User avatar
Posts: 574
Joined: Tue Dec 05, 2006 1:00 am
Location: Deep in the Heart of Asia

Next
64 posts • Page 1 of 5 • 1, 2, 3, 4, 5