
Question for the experts !!


Yahya Nael

Recommended Posts

Yes, bio-mimicry is as old as the hills.

 

What is a camera if not mimicking an eyeball? What is a camera lens if not mimicking an eyeball lens?

 

Mimicry is probably not the best word for it. I'd coin the term: bio-inspiration. Or bio-convergence because often the inspiration isn't directly from biology but some brainwave that just happens to exhibit parallels with some biological solution. It is fascinating the way in which the universe solves problems all by itself.

 

But our thoughts are as much a part of the universe as anything else, so I wouldn't separate our thoughts from some problem the universe is otherwise attempting to solve.

 

I don't know that human consciousness contains any embedded knowledge on this front. Our consciousness is interconnected with the world at large. So our consciousness is not entirely human in that regard. By way of explanation I can see in front of me all sorts of things that are not particularly human in any way (a cow, a tree, etc.) yet these things populate my consciousness in no less an important way than anything particularly human in origin.

 

C


 

So does the eye "jitter" by an angle that covers a fraction of a rod or cone? These are the thoughts one will have when trying to relate the eye to a sensor.

 

It doesn't matter. We can solve by how much the jitter should be in terms of whatever materials we use. It doesn't have to be a carbon copy of some biological solution.

 

This train of thought doesn't need to be a precursor for a future in which movie cameras are made from growing biological brains with biological eyeballs.

 

C


 

What is a camera if not mimicking an eyeball? ....

 

......I don't know that human consciousness contains any embedded knowledge on this front. Our consciousness is interconnected with the world at large. So our consciousness is not entirely human in that regard. By way of explanation I can see in front of me all sorts of things that are not particularly human in any way (a cow, a tree, etc.) yet these things populate my consciousness in no less an important way than anything particularly human in origin.....

 

 

I don't think any reasonable person who is familiar with the eye will consider that a camera or lens mimics the eye. It's like saying that an airplane mimics a bird. It's not true. Adopting a loose approach will compromise the science and compromise the question.

 

Being human, or incarnate in any other way, we may observe that which we consider external to ourselves. As humans, or...., not yet expanded to a universal condition, what we experience is not to be confused with our own self. But somehow it is. At this level of experience the experiencer is identified with and bounded by the object of experience. We can say we are interconnected with the world, it's true, but it's fair to say that at that moment, that (object from) the world has defined us. Regardless of how numerous are the objects populating our awareness, they are all on the same level, like wallpaper, distracting a profound awareness in waiting. That awareness may choose to wait for some time, treading water. And this is the natural condition of human experience in the world that you and I share.


I'm compelled to say,

one often reacts to the posts of others (reactively), but, regardless of that, it takes courage to engage with the underlying issues. So thanks Carl, Denis, JEClark and anyone I missed. There's almost no one doing that.

 

Satsuki charmingly expressed that the thread had entered (got lost in) the long grass. Fair, but, if interested in the nature of film, and in what digital is and could be, where else is there to go?


 

I don't think any reasonable person who is familiar with the eye will consider that a camera or lens mimics the eye. It's like saying that an airplane mimics a bird. It's not true. Adopting a loose approach will compromise the science and compromise the question.

 

What?

 

Of course a camera mimics the eyeball. Of course an airplane mimics birds.

 

That doesn't mean airplanes should have flapping wings and lay eggs, or cameras should be ball shaped, and shed tears.

 

It is not arbitrary that the term 'wing', used in relation to a bird, would find re-use in relation to the wings of aircraft.

 

The term 'aviation' is from the Latin avis, meaning "bird", and was coined in 1863 by French aviation pioneer Guillaume Joseph Gabriel de La Landelle (1812–1886) in "Aviation ou Navigation aérienne".

 

"The origin of mankind's desire to fly is lost in the distant past. From the earliest legends there have been stories of men strapping birdlike wings, stiffened cloaks or other devices to themselves and attempting to fly, typically by jumping off a tower. The Greek legend of Daedalus and Icarus is one of the earliest known, others originated from India, China and the European Dark Ages. During this early period the issues of lift, stability and control were not understood, and most attempts ended in serious injury or death." - http://en.wikipedia.org/wiki/History_of_aviation#Tower_jumping

 

The last line in this is insightful: "most attempts ended in serious injury or death".

 

This is an example of a very good reason why you would not do a carbon copy when otherwise inspired by nature's miracles.

 

C

 



If I find inspiration in the structure of a biological retina it's not because it's biological - it's because it has certain properties which solve the problem at hand.

 

Whether it solves the same problem in perception is an entirely different area of science. It's the same in computer software and the realm of artificial intelligence. A technique called neural networks was inspired by the way in which the brain works, but techniques better than neural networks have been found for similar problems previously solved by neural networks. Whether neural networks solve the same problems in biological systems is an entirely different area of science.

 

 


 

C


 

What?

 

Of course a camera mimics the eyeball. Of course an airplane mimics birds.

 

That doesn't mean airplanes should have flapping wings....

 

They probably should if you are using the mimic word.

 

Overt lifting surfaces on birds and aircraft may both be called wings. How does this suggest that aircraft mimic birds? Maybe your assertion suggests that most people are non-discerning about this... don't know the difference.

 

It's one of the gross oddities of aircraft that the wings do not mimic the wings of birds to create thrust (flapping). So this is a bad example to pull for the mimicry theme. But yes, when gliding, mimicry might be a legitimate relationship, if one was desperate to prove a point, or interested in glider design.


Mimicry can take many forms and does not need to be exactly like what is being mimicked.

 

For example, to mimic a bird can mean running around the front yard flapping your arms around in front of a group of children.

 

"What am I?" you shout. And the kids say "a bird".

 

Mimicking a bird can mean "flying through the air" in the sense that birds also "fly through the air", even if the means by which you are doing so is with a jet pack. You are mimicking an aspect of birds: the act of "flying through the air".

 

Mimicking a bird can mean studying the physics that makes bird flight possible and applying the same physics to make a heap of metal fly through the air.

 

And finally, mimicking a bird can mean denying all of the above.

 

C


Mimicry can take many forms and does not need to be exactly like what is being mimicked. ....

 

Then there's "Mimic" (1997)...

 



I can't leave this strand without saying that I am unconvinced by all Carl's arguments re film grain.

1. He emphasizes a single case where image noise provides an information advantage, when a high bit-depth image is reduced to low-bit depth whence the noise introduces dither. That advantage is not reaped in our technologies, where the film grain noise is of low modulation (as seen or optically recorded) and where there are no dramatic transforms in bit depth. Anyhow, our transform could itself introduce dither.

2. Information advantage is not at all what film grain provides to stills or to movies. Gestalt psychology (and the allied aesthetics) studies what it provides, information theory doesn't.

 

To Greg, eye saccades are not tiny sub-sensor jitters, but of several degrees (as explained back in post #43).

 

Concerning the camera mimicking the eye. The original "camera obscura" mimicked the eye in two important ways. It used a lens to make a real 2-dimensional image from the 3-dimensional scene outside. And it was enclosed to prevent the scene's light from falling on the image except via the lens. Few further developments in cameras relate to the eye. An iris was added to the lens to adjust the image brightness on the receiving medium, while incidentally modifying the depth of field. The eye is shutterless. The eye focuses differently from cameras.

 

Color films and color video mimic the eye insofar as they incorporate sensors with three different spectral sensitivities. Video color is more eye-like when it recodes the three channels as YUV. The now accepted opponent process theory of color vision works similarly. Video cameras are altogether more eye-like insofar as their sensor plane issues signals to some "higher" functioning something -- computer. Film functions dumbly by directly turning the sensor into the displayer. This is what prevents a reversal color film from making correct colors. (Maxwell figured that out in around 1850.) Video can, in theory, make correct colors.
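The YUV recoding mentioned here can be sketched with the BT.601 weights (a minimal sketch; real pipelines differ in scaling, offsets, and chroma handling):

```python
def rgb_to_yuv(r, g, b):
    """Recode normalised RGB as luminance (Y) plus two colour-difference
    channels (U, V), using the BT.601 weights -- loosely analogous to
    the opponent-process organisation of colour vision."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # blue-minus-luma difference
    v = 0.877 * (r - y)   # red-minus-luma difference
    return y, u, v

# A neutral grey carries all its signal in Y; U and V sit at zero.
print(rgb_to_yuv(0.5, 0.5, 0.5))
```

The eye-likeness is in the split itself: one achromatic channel plus two difference channels, rather than three independent colour records.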


I can't leave this strand without saying that I am unconvinced by all Carl's arguments re film grain. ....

To Greg, eye saccades are not tiny sub-sensor jitters, but of several degrees (as explained back in post #43).

 

 

 

Whether one calls such 'saccades' or coins a new term for such, there are involuntary micro-jitters occurring in natural perception, and without which the perceived image will disappear. In this link, such micro-jitters are part of what is known as "saccades". But if there is a more technical term for such then we can, by all means, adopt that. I certainly wasn't the one who introduced the term "saccade" to describe what I was otherwise talking about.

 

http://www.ncbi.nlm.nih.gov/books/NBK10991/

 

http://www.ncbi.nlm.nih.gov/books/NBK11156/box/A1346/?report=objectonly

 

 

Now there is no necessity for the length of a jitter span to be less than the diameter of a single cell. Any length will do. In terms of a proposed sensor, a wider jitter span would be better than a smaller one since it will increase the variability and range of neutralisation. But a wider span will also mean more energy required to achieve such. So no doubt a sweet spot can be found to satisfy arguments between artists and economists.

 

1. He emphasizes a single case where image noise provides an information advantage, when a high bit-depth image is reduced to low-bit depth whence the noise introduces dither. That advantage is not reaped in our technologies, where the film grain noise is of low modulation (as seen or optically recorded) and where there are no dramatic transforms in bit depth. Anyhow, our transform could itself introduce dither.

 

 

There is a dramatic visual difference between film and video. The best scenario is to screen both side by side. Using the same image will be a help. One might employ a beamsplitter and some careful framing to better appreciate the difference in question.

 

Now explaining the difference one will see, between film and video, will be far more difficult than seeing the difference. One may end up resorting to poetic turns of phrase, which is perfectly apt in my books.

 

But occasionally one wants to put one's finger on it. To do more than wax lyrical. But it's completely unnecessary. We can shoot film for reasons that have their origin in what is perceivable rather than what is explainable.

 

Colorists will know (or claim quite passionately) that there are more tones and colours to be extracted from a digital scan of film than from digital that has directly encoded light. And this should tell us something. That something is going on here.

 

It is easy to dismiss such enthusiasm as if it were some sort of delusion that artists might be entertaining. Alternatively we can assume there is some truth in such and see if we can't explain it.

 

All I'm doing is providing an explanation in the best way I can. If there are better explanations I'm all ears.

 

Now Dennis claims, without demonstrating in any way (be it by explanation, pictures, poetic turns of phrase, etc) that:

 

"dither does not provide any advantage in our technologies".

 

We're just supposed to believe it.

 

If I've done anything at all it is to show that an advantage does exist and that it must continue to exist as the bit-depth is increased. As one increases the bit-depth the error in the signal is reduced. But it can't be reduced to zero. The error is always there.

 

Dithering localises bit-depth errors (no matter how small those errors are) and redistributes the corresponding correction to adjacent pixels, throughout the entire image space.
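One simple way to picture that redistribution is one-dimensional error diffusion (a toy sketch of the general idea, not any particular product's pipeline):

```python
def error_diffuse(signal, levels=2):
    """Quantize each sample in [0, 1] to one of `levels` output levels
    and carry the rounding error onto the next sample, so tonal detail
    a pixel cannot show on its own reappears as a spatial pattern."""
    step = 1.0 / (levels - 1)
    out, carry = [], 0.0
    for s in signal:
        v = s + carry                              # sample plus inherited error
        q = min(1.0, max(0.0, round(v / step) * step))  # nearest displayable level
        out.append(q)
        carry = v - q                              # error passed to the neighbour
    return out

# A flat 10% grey survives 1-bit output as a sparse pattern of ones:
print(error_diffuse([0.1] * 100).count(1.0))   # roughly 10 of 100 pixels
```

No single output pixel holds the 10% tone, but the pattern as a whole does, which is the "redistribution to adjacent pixels" in miniature.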

 

I've made this obvious in terms of images so that you don't need a degree in statistics in order to get a good sense of it. If one believes those images are misleading then one can just explore it further and find where the misleading has occurred.

 

Dennis says that our transform could introduce dither, but as I've argued it is too late to do dithering once you've baked in the digital signal. The bit-depth errors you might have been hoping to correct are no longer correctable.
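The order-dependence is easy to demonstrate: noise added before quantization preserves a tone that quantization alone destroys. A toy sketch with a flat grey field (the 30% level and the noise amplitude are arbitrary choices for illustration):

```python
import random

rng = random.Random(1)
flat = [0.3] * 10000                 # a flat 30% grey field

# Quantizing straight to 1 bit loses the tone entirely:
hard = [round(v) for v in flat]      # every pixel rounds the same way, to 0

# Small noise added *before* quantization keeps the tone, as a pattern:
dithered = [round(min(1.0, max(0.0, v + rng.uniform(-0.5, 0.5))))
            for v in flat]

print(sum(hard) / len(hard))            # 0.0 -- the tone is gone
print(sum(dithered) / len(dithered))    # close to 0.3
```

Once the signal has been quantized ("baked in"), the information the noise would have rescued is no longer there to rescue.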

 

 

2. Information advantage is not at all what film grain provides to stills or to movies. Gestalt psychology (and the allied aesthetics) studies what it provides, information theory doesn't.

 

As should be evident from the images and explanation provided this conclusion is incorrect. Certainly an image provided by stills and movies will be psychological. But it's not just psychological. Information theory plays an important role.

 

What we see is not just in our heads.

 

C


I should just add that what is being explored here is not meant to be an attack on digital. One can have a preference either way and it wouldn't make any difference, for what is being explored is not whether one is better than the other (in terms of whatever criteria one uses) but how to explain the difference between the two.

 

I enjoy the qualities of both. And the qualities of what other, yet to be built, technologies might provide.

 

C


Another note I might just make is that 'information theory' has a certain historical bias towards the digital, since it was conceived in the early days of digital (by Claude Shannon) and was based on notions of reliable communication, where noise was considered the enemy.

The field has widened since then but is still biased in terms of communication theory - where the emphasis is on tests between what is received against what is sent. This bias can skew the emphasis being tackled here.

 

If we borrow some concepts from information theory it's not to reinstate notions of fidelity in communication. Indeed the philosophical background I've been using is to actually dismiss concepts of such. It is the result (and the process employed) which becomes reality. An artist works with this reality. Manipulates it. Or defers to various technologies that do so. Or creates such technology. Intentionally or otherwise a particular result is created.

 

The discussion is still about this particular result, not about whether it communicates anything, but how one might create a particular result, or not as the case may be.

 

C


Clarity and brevity are the advantages of internet information sites like this one.

 

It's actually quite useful to practice condensing your knowledge for others to properly make use of it.

 

Yes, I don't know if I'm doing a good job of it. The more I try to condense, the more words I seem to be generating! But I think my latter posts are better than my earlier ones in this regard. They're all trying out different ways to condense the same thing. There are some better lines in the later posts that I think zero in on the concept, and if they don't quite hit the nail on the head, come close.

 

C


Now Dennis claims, without demonstrating in any way (be it by explanation, pictures, poetic turns of phrase, etc) that:

 

"dither does not provide any advantage in our technologies".

 

We're just supposed to believe it.

 

Carl, behave yourself! I didn't write what you say I wrote and put in quote marks, above. What I wrote, and what you copied above your post and presumably sort-of read, was this:

 

 

I can't leave this strand without saying that I am unconvinced by all Carl's arguments re film grain.

1. He emphasizes a single case where image noise provides an information advantage, when a high bit-depth image is reduced to low-bit depth whence the noise introduces dither. That advantage is not reaped in our technologies, where the film grain noise is of low modulation (as seen or optically recorded) and where there are no dramatic transforms in bit depth. Anyhow, our transform could itself introduce dither.

 

I said that your illustration of noise producing useful dither was not applicable to our technologies because your noise had large amplitude, unlike film grain noise, and also because you are reducing the original 8-bit image to 2-bits. It was a dramatic illustration but irrelevant. You can't produce a corresponding illustration where the noise is film grain-like and where the output is 8-bits as all output today is. (Your input can be greater bit depth if you wish.) You've leaped over the quantitative to the qualitative where your arguments are just amusing, not serious.

 

Of course dither provides advantage to our technologies. It allows an 8-bit monitor to show 10-bit video without the banding that usually results from reduction to 8-bit. Apple QuickTime does this on my 8-bit monitor when it plays ProRes, but interestingly it does not do it when it plays uncompressed 10-bit. That dither is created by QuickTime, it is not in the original 10-bit video (which has no grain or noise). That is the meaning of my final sentence quoted just above. When a transform must knock 10-bit down to 8-bit it has the option of representing the lost 10-bit levels by wee patterns of the neighboring 8-bit levels just above and just below.

 

Example: 10-bit 500 becomes 8-bit 125. 10-bit 504 becomes 8-bit 126. So what does the player do when it encounters 10-bit 501? It makes a fine pattern of 75% 8-bit 125 and 25% 8-bit 126 (501 sits a quarter of the way from 125 to 126). It looks pretty good. People may download my banding test clips and see how they play on their systems.
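One way a player could produce such a pattern is to add uniform noise of one output step before truncating (a sketch of the idea, not QuickTime's actual algorithm; note that 501 sits a quarter of the way from 125 to 126, so the mix comes out around 75% code 125 and 25% code 126):

```python
import random

def dither_down(level10, rng):
    """Map one 10-bit level (0-1023) to 8-bit (0-255), using uniform
    noise of one output step so the lost fractional level becomes a
    mixture of the two neighbouring 8-bit codes."""
    exact = level10 / 4.0                 # e.g. 501 -> 125.25
    return min(255, int(exact + rng.random()))

rng = random.Random(0)
pixels = [dither_down(501, rng) for _ in range(10000)]
print(pixels.count(125) / len(pixels))   # near 0.75
print(pixels.count(126) / len(pixels))   # near 0.25
```

Averaged over an area, the pattern reads as 125.25: a level the 8-bit display cannot show in any single pixel.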

 

With such a transform you didn't need to add 10% noise to the face that was to become 2-bit. You simply used the wrong transform. Video post production is a chain of transforms rather than a chain of re-imagings as film post production was. Video transforms can be quite intelligent.


Dennis has perhaps hit upon a much briefer explanation that is probably better than my long-winded one, but entirely consistent with it.

 

Basically Dennis has asked if there are not more colours in a digital screen than the bit-depth would suggest. For example, a 24-bit screen is said to have 16,777,216 colours, but between one pixel and another an in-between colour can be induced. Increase the number of pixels you are looking at and the number of possible colours will increase. Or, in black and white, the number of tones will increase.
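A toy illustration of the induced in-between tone:

```python
# Four 8-bit pixels using only the two codes 125 and 126. Viewed at a
# distance where the eye averages over them, the patch reads as a tone
# neither code can display on its own.
block = [125, 126, 126, 126]
print(sum(block) / len(block))   # 125.75 -- an "in-between" tone
```

The more pixels the eye averages over, the finer the gradations between the two codes that can be induced.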

 

This would be the psychological side we might say.

 

Dithering is able to exploit this phenomenon by refactoring a source signal in such a way that the result occupies this additional domain.

 

And in terms of film/digital comparisons, the tones or colours encoded in film will be related to that which a digital sensor has otherwise failed to see. And such colours/tones will have a certain clarity. They will not be just random (neutral) colours.

 

Painters have known this for a long time and exploited it. Pointillism is a particularly overt case, but all paintings have exploited this. Van Gogh was a master of it.

 

C

 

[image: Van Gogh self-portrait]


 

With such a transform you didn't need to add 10% noise to the face that was to become 2-bit. You simply used the wrong transform. Video post production is a chain of transforms rather than a chain of re-imagings as film post production was. Video transforms can be quite intelligent.

 

 

I write software for a living. Software that deals with this sort of thing all the time. I write the algorithms (some quite complex) that implement all sorts of processing pipelines. But that doesn't matter. What matters is whether it's correct.

 

As I've said many times now, the images I posted were to exaggerate the process, so that one can get an idea of what is otherwise happening in the back of one's head.

 

My later post, with the neutral squares, addressed the concerns you raised. There I used 4% noise instead of 10%, and I showed a 1 bit sample, a 4 bit sample, and an 8 bit sample, of the same source.

 

My apologies if I misread that previous post. Sometimes one can be too brief!

 

C


No. No. No. You have misunderstood my comments about color dithering in recent posts #58 and #60 on a parallel strand named "What's the Attraction?".
I like that Van Gogh self-portrait, but the discussion in here should stay monochrome. In here, dither is a means to display intermediate tones.

We do need more tones than 8-bit can provide as shown by optimal Hecht coding. But we hardly need more colors (sans tones) than 8-bit can provide. The motivation for color dithering -- pointillism, etc. -- was not for precision but expansion. See that other strand.


No. No. No. You have misunderstood my comments about color dithering in recent posts #58 and #60 on a parallel strand named "What's the Attraction?". ....

 

Dither is an appropriate technique for monochrome or colour, but we can keep the discussion here to monochrome.

 

I agree entirely that precision is not the task.

 

If I use terms 'error' and 'correction' it is not without misgivings.

 

If we want to exploit the additional tonal capacity of a particular bit-depth screen we need a signal with which to 'paint' into that additional tonal space.

 

So we need a signal that would possess more tones than the bit-depth of the screen we're using would suggest. Otherwise we won't have any signal with which to populate that additional tonal space.

 

The terms 'error' and 'correction' relate to how we will paint into that space from our larger tone signal. Basically, for every pixel we paint there will be an error between what the pixel can display on its own and what we otherwise want to see (or hallucinate).

 

We introduce the idea of a 'correction' in which the said pixel can't help us, but adjacent pixels can. We distribute the required correction to the adjacent pixels. This is how we do it digitally.

 

Now the great thing about film is that it does something remarkably similar, but by analog means.

 

C


........

To Greg, eye saccades are not tiny sub-sensor jitters, but of several degrees (as explained back in post #43).

 

 

Hey Dennis,

Yes I understood that. Which is why I didn't feel it was a legitimate candidate for the mimicry theme.

 

For Carl...

When an aircraft flaps its wings as if to play charades for the children, then yes, it's mimicking a bird...


 

Yes I understood that. Which is why I didn't feel it was a legitimate candidate for the mimicry theme. ....

When an aircraft flaps its wings as if to play charades for the children, then yes, it's mimicking a bird...

 

Gregg would have us believe that any correlation we might make, between the micro-jitter a sensor makes and the micro-jitter an eye makes, can't be made because such movements in the eye are not called "saccades".

 

So call them something else.

 

I wasn't the one to call them saccades at all. That's entirely something that Dennis assumed, and Greg would like to suggest that he too has assumed.

 

So fair enough. If Gregg wants to wear some egg on his face, who am I to argue.

 

And I wasn't the one to suggest the correlation be called "bio-mimicry". Indeed I suggested the term "bio-inspired" or "bio-techno-convergence". But bio-mimicry will do.

 

I've never seen an aircraft playing charades, but if Gregg has seen such then I think he's perfectly entitled to call such mimicry.

 

C


 

Gregg would have us believe that any correlation we might make, between the micro-jitter a sensor makes, and the micro-jitter an eye makes, can't be made because such movement in eyes are not called "saccades".........

 

So fair enough. If Gregg wants to wear some egg on his face, who am I argue......

 

I've never seen an aircraft playing charades, but if Gregg has seen such then I think he's perfectly entitled to call such mimicry.

 

 

I thought that micro jittering of pixels, translations equivalent to tiny angular displacements, were being likened to large angular displacements of the eye, the relationship being made using the mimic word. I thought that was a bit of a stretch.

 

The anecdote or joke about flapping wings for the kids playing charades or whatever...made me think we have a different version of the meaning of the word mimic or mimicry.

 

In an effort not to be willfully misunderstood: it's a bit weak saying that aircraft mimic birds. Any good aero engineer will admit that they don't (there are a tiny number of aircraft that actually do flap their wings). Hence my jibe about a flapping-wing aircraft playing charades for the benefit of children.

 

You should just relax, enjoy the view, but it will be a long, long wait. Meanwhile, what does the egg taste like?


Hi Gregg,

 

I was being a bit of a smart arse. My apologies.

 

 

I did not suggest any correlation be made between a micro-jittering sensor and saccadic motion, or any other motion that an eye makes.

 

Be that as it may, once mentioned, I looked it up. And it turns out there are micro-jitters that an eye makes, different from and smaller than what is generally called saccadic motion. One article I found calls these "micro-saccadic" motions and "drift". Those called drift have an oscillation not much bigger than the size of a cell. The reason suggested for these cell sized oscillations is that without such oscillation the image will disappear. They do an experiment to prove it.

 

Whether this might be of use in a better movie camera remains to be seen. Aaton tried it, but ran into some financial difficulties I believe.

 

If we're going to find similarities between things, those similarities do not need to be exactly the same. That's why we call them similar rather than identical. Mimicry is very much like that. It's not necessary to imitate all the characteristics of something in order to express an idea of that thing. One can often sketch the barest of outlines, and the thing so sketched can be indicated by such.

 

Also, it's not necessary to imitate what something looks like in order to mimic that thing. In bio-mimicry we're more interested in certain properties of a thing rather than the thing as a whole.

 

Consider solar energy panels. We can say that these mimic leaves on a tree, in the sense that leaves on a tree are solar energy panels. But in making this link it's not to confuse solar energy panels with leaves on a tree. We are looking for ways to improve solar energy panels, and by looking at how leaves on a tree are able to collect solar power efficiently we can apply ideas from such to some new way of making solar energy panels.

 

Bio-mimicry is summed up quite well by Wikipedia:

Biomimetics or biomimicry is the imitation of the models, systems, and elements of nature for the purpose of solving complex human problems.[1] The terms biomimetics and biomimicry come from Ancient Greek: βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.[2]

Living organisms have evolved well-adapted structures and materials over geological time through natural selection. Biomimetics has given rise to new technologies inspired by biological solutions at macro and nanoscales. Humans have looked at nature for answers to problems throughout our existence. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy.

 

