I'm kind of upset with myself for not being able to keep up with it. That's what I get for getting a non-programming job and letting my knowledge atrophy, I suppose.
Indies by their nature don't answer to anyone and don't have to use the common toolchains, but the vast majority do because using a different language rarely comes with enough efficiency benefits to offset losing the library of available code already constructed for the most dominant languages. By the same token, such applications by themselves cannot change the balance of power because such code is rarely published to the public or even commercial domain, and even if it were, splitting the effort amongst different environments would result in no centralized repository of improving tools.
I don't know. Unity 3D is fairly popular among indies, and that's not C++.
By the same token, there haven't been any languages that have come down the pipeline that offer enough improvement in workflow to convince much of anyone to switch.
I don't think it's a coincidence that the last super big shift in the software industry was to jump from C++ to Java and C#, both of which were backed by large corporations with a lot of marketing money to throw around. Languages and technologies get chosen more because of marketing and not being too different from what people are already using than because of their technical merits. Java is a really mediocre language, unless your only point of comparison is C++. If it was a matter of productivity, then more people would be using F# over C#. Or perhaps OCaml, which is what F# is a dialect of.
It's not like you have to give up all the libraries and frameworks and even legacy code to switch to another language. F# interoperates with other .NET languages and Scala interoperates with Java.
I don't know. Unity 3D is fairly popular among indies, and that's not C++.
Unity uses JavaScript, C#, or Boo. Hardly mold-breaking.
I don't think it's a coincidence that the last super big shift in the software industry was to jump from C++ to Java and C#, both of which were backed by large corporations with a lot of marketing money to throw around. Languages and technologies get chosen more because of marketing and not being too different from what people are already using than because of their technical merits. Java is a really mediocre language, unless your only point of comparison is C++. If it was a matter of productivity, then more people would be using F# over C#. Or perhaps OCaml, which is what F# is a dialect of.
All of the C derivatives are bad for productivity, up to and including C#. Java is horribad slow under most architectures, but develops and iterates quickly, so can be useful for productivity.
I haven't looked at F# since its inception, but a quick glance at the more modern implementation still shows things about it that I don't like; it uses compile-time type inference, which in my opinion causes far more bad habits than any amount of OOP overloading.
It's not like you have to give up all the libraries and frameworks and even legacy code to switch to another language. F# interoperates with other .NET languages and Scala interoperates with Java.
Interoperability is not the same thing as homogeneous code. The more frames of mind you have to be in to follow a process, the more likely a mistake will be made. I am not a particular fan of building pieces of an application under different paradigms; it increases code complexity and maintenance overhead. While there are specific use cases where this can be beneficial because some languages are just bloody awful at some operations (virtually all C variants when working with string data, for instance), it's not something I would ever advocate for an application designed to be a lasting tool.
Unity uses JavaScript, C#, or Boo. Hardly mold-breaking.
My point isn't that they're mold-breaking. It's natural that they're not, since most people come from an OOP background. The point is that clearly there's a demand for a language that's not C++ among indie developers, and clearly some people are trying to meet that need, so I think it's unfair to deny that indies try out alternative languages or frameworks. Especially when the move from manual memory management to garbage collection is probably the biggest step forward the software industry has made in the last two decades.
Originally posted by Icemage
All of the C derivatives are bad for productivity, up to and including C#. Java is horribad slow under most architectures, but develops and iterates quickly, so can be useful for productivity.
This is kind of tangential but I'm curious why you consider Java good for productivity and not C#. C# is essentially a Java clone that picked up new features faster than Java. Java doesn't even have higher-order functions. I'm also fairly sure Java (and C#) haven't been slow for quite a while, provided the application's had some time to "warm up" and let the JIT compiler kick in, but then "slow" is relative.
Originally posted by Icemage
I haven't looked at F# since its inception, but a quick glance at the more modern implementation still shows things about it that I don't like; it uses compile-time type inference, which in my opinion causes far more bad habits than any amount of OOP overloading.
What bad habits?
Type inference is borderline necessary if you want higher-order functions to be easy to use. Besides, the bureaucracy of type annotations is probably the main reason people like dynamically unityped languages like Python or Ruby. If so many people are willing to cripple themselves with dynamic typing, clearly they think mandatory type annotations everywhere is overkill. You can include the types of variables if you want to, and everything is perfectly type safe even if you don't.
If you're willing to gloss over OOP's multiple shortcomings because you have Best Practices and Coding Standards, I don't see why you'd hold type inference against F# - especially since documenting your code is mandatory everywhere regardless of language. In the very worst case, the IDE still knows the types.
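For what it's worth, here's a tiny sketch (ML-ish syntax, names made up purely for illustration) of how little you actually give up:
fun area(r : real) : real = 3.14159 * r * r
(* fully annotated: real -> real *)
fun area2(r) = 3.14159 * r * r
(* no annotations, but the real literal pins it down: still inferred as real -> real, still fully checked *)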
Originally posted by Icemage
Interoperability is not the same thing as homogeneous code. The more frames of mind you have to be in to follow a process, the more likely a mistake will be made. I am not a particular fan of building pieces of an application under different paradigms; it increases code complexity and maintenance overhead. While there are specific use cases where this can be beneficial because some languages are just bloody awful at some operations (virtually all C variants when working with string data, for instance), it's not something I would ever advocate for an application designed to be a lasting tool.
That's all true, but the transition to a new technology or paradigm will pretty much always be uncomfortable. You can't ask for much more than good backwards compatibility. Somebody's gotta take the plunge somewhere, or we'd all still be coding in C.
That's all true, but the transition to a new technology or paradigm will pretty much always be uncomfortable. You can't ask for much more than good backwards compatibility. Somebody's gotta take the plunge somewhere, or we'd all still be coding in C.
I'm all for progress. I just ask that we progress to a language where I can understand the code I wrote six months ago.
I don't want clever, I don't want the bleeding edge. Just something clear and readable, so I write fewer bugs and have an easier time fixing stuff later.
Which is exactly why it's silly to be against the low barrier to entry for indie development. From what I've read - and hopefully someone else can chime in here - a very large portion of professional game development is toxic: incredibly unhealthy work/life balance, high turnover rates, and lower wages/benefits than an average enterprisey software developer gets. They have a huge pool of young, unmarried people they can chew up and spit out over a short time span. With a process like that, relatively few people last long enough to become experts. On top of that, the state of the art still involves using an antiquated and tragically flawed programming language (C++) and a very misunderstood and problematic programming style (object-oriented).
Letting anyone with a computer try to make a game may produce a lot of garbage, but hopefully we'll see some progress in game development techniques and pass them on more quickly, and to a wider audience, than the industry currently can.
I don't know anything about programming, but I can comment on the developer culture if anybody's interested.
I don't know anything about programming, but I can comment on the developer culture if anybody's interested.
At least I'm interested, although it's probably better to re-start that discussion in the XBox 1 thread.
Originally posted by ItazuraNhomango
I don't want clever, I don't want the bleeding edge. Just something clear and readable, so I write fewer bugs and have an easier time fixing stuff later.
I can understand that. I just think we've reached the limits of what we can do using OOP. I could go into examples of how functional languages improve upon the usual way of doing things, but I imagine you guys are getting tired of my soapboxing by now.
The point is that clearly there's a demand for a language that's not C++ among indie developers, and clearly some people are trying to meet that need, so I think it's unfair to deny that indies try out alternative languages or frameworks.
Oh, no doubt. C++ (and even the farther offshoots like C#) have an iteration workflow that can be best described as "ponderous". Even with advanced compilers, it still takes a lot of time to recompile a build in all C variants compared to pretty much everything else.
This is kind of tangential but I'm curious why you consider Java good for productivity and not C#.
Java is faster to iterate and test-bed code on compared to C#. It's also less likely to produce an unrecoverable (and thus difficult to trace) fail state if there is a coding error.
What bad habits?
Relying on compiler intuition for data typing often causes issues when working with fractional values, and especially when there is a need to transliterate to text/string formats. I'm a big believer in explicit data typing, and letting the compiler take its best guess at it is sloppy at best and headache-inducing at worst if it causes problems like unforeseen rounding errors.
And yes, I dislike PHP/Python/Ruby and other unitype languages for similar reasons.
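Something like this, to sketch the kind of thing I mean (a made-up ML-style example, not code from any real project):
fun average(a, b) = (a + b) div 2
(* inferred as int * int -> int: average(3, 4) quietly gives 3, and the fraction is gone *)
fun average2(a, b) : real = (a + b) / 2.0
(* one explicit annotation (plus the real literal) makes the intent unmistakable *)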
That's all true, but the transition to a new technology or paradigm will pretty much always be uncomfortable. You can't ask for much more than good backwards compatibility. Somebody's gotta take the plunge somewhere, or we'd all still be coding in C.
I agree, but I don't think bolting pieces of F# onto a C codebase is going to get you where you want to go. You can't really transition that schema to anything else, or even slowly convert the legacy C portions to F#. It's just an awful solution all around; if a language is to be seriously considered as a contender for the dominant language, it really ought to be feature-complete, not riding the coat-tails of what came before.
Java is faster to iterate and test-bed code on compared to C#. It's also less likely to produce an unrecoverable (and thus difficult to trace) fail state if there is a coding error.
I remember seeing a study somewhere stating that Java is a significant improvement in programmer productivity compared to C++, but that Python is well ahead of both.
I just think we've reached the limits of what we can do using OOP. I could go into examples of how functional languages improve upon the usual way of doing things, but I imagine you guys are getting tired of my soapboxing by now.
A (pure) functional programming language is something every programmer needs to experience, but I'm not sure it's the right kind of language for most jobs. I think the majority of people (including myself) find it easier to conceive of a step-by-step path to a solution--imperative programming--than to come up with a functional approach for most problems. It's just more natural for humans, I think.
Functional languages have been around forever, and they haven't been able to make the case that their conceptual purity is worth the extra mental energy for most uses. Not yet, anyway.
I think if you include Go (from Google) as a C variant, that won't hold.
I don't consider Go to be a C variant any more than Esperanto is a Latin variant. It's a neat language, though.
I remember seeing a study somewhere stating that Java is a significant improvement in programmer productivity compared to C++, but that Python is well ahead of both.
Python is indeed a much more productive language than Java or C++. C++ is actually one of the least productive programming languages ever conceived, aside from really low level stuff like Assembler or ridiculously verbose syntax languages like LISP or COBOL.
A (pure) functional programming language is something every programmer needs to experience, but I'm not sure it's the right kind of language for most jobs. I think the majority of people (including myself) find it easier to conceive of a step-by-step path to a solution--imperative programming--than to come up with a functional approach for most problems. It's just more natural for humans, I think.
Functional languages have been around forever, and they haven't been able to make the case that their conceptual purity is worth the extra mental energy for most uses. Not yet, anyway.
I don't think ideological purity from either side is the appropriate viewpoint. Some applications lend themselves more fluidly to one approach than others, and sometimes it just doesn't matter. In general, I consider any fast "throw-away" tool like a bit of analytical code or a one-shot process to strongly favor functional programming techniques, while most processes built up over many iterations, a long development timeline, or multiple programmers definitely favor OOP. There are of course exceptions to any rule, but these are "tendencies", and I think both approaches have value and should be evaluated case-by-case based on need.
Even with advanced compilers, it still takes a lot of time to recompile a build in all C variants compared to pretty much everything else.
D compiles lightning fast, fast enough to use it as a "scripting language" for the sort of throwaway command line programs you'd write in, say, Python.
Originally posted by Icemage
Java is faster to iterate and test-bed code on compared to C#.
That still doesn't answer why. C#'s features are pretty much a superset of Java's, so how can Java's iteration times be shorter?
Originally posted by Icemage
Relying on compiler intuition for data typing often causes issues when working with fractional values, and especially when there is a need to transliterate to text/string formats. I'm a big believer in explicit data typing, and letting the compiler take its best guess at it is sloppy at best and headache-inducing at worst if it causes problems like unforeseen rounding errors.
And yes, I dislike PHP/Python/Ruby and other unitype languages for similar reasons.
Type inference and dynamic typing are very different. Dynamic typing masks errors, type inference doesn't. I haven't used F#, but I do use Standard ML, which is what F# is based on, and I'm fairly sure they use the same type inference algorithm. Standard ML is extremely strongly typed. There are no type coercions - you can't even mix integers with floats (e.g. 3 + 4.5 is an error). Save for one corner case*, the compiler can not only determine the correct types of variables, it'll determine the most general type. You get genericity for free. For example, consider this function that simply returns its argument:
fun identity(arg) = arg
This function works for any type, and the compiler correctly infers that the type is 'a -> 'a, where 'a is a type variable similar to the T often used in Java interfaces like List<T>. In other words, it infers that the argument can be any type, and that the return value must be the same type. That example is fairly trivial, but like I commented earlier, its real value starts to show once you start passing functions around. Let's write a function that composes two others:
fun compose(function1, function2) =
    fn arg => function1(function2(arg))
Where fn var => exp is the syntax for an anonymous function. What is the type of compose? (('a -> 'b) * ('c -> 'a)) -> ('c -> 'b). Or, to make it clearer:
* Type of arg 1: 'a -> 'b
* Type of arg 2: 'c -> 'a
* Type of return value: 'c -> 'b
Type annotations quickly get complicated once you start using higher-order functions, even though what you're trying to do is often simple. You can just write what you mean and the compiler will verify that you haven't done something impossible.
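To make that concrete, here's a quick usage sketch (the helper functions are made up purely for illustration):
fun increment(x) = x + 1                  (* int -> int *)
fun double(x) = x * 2                     (* int -> int *)
val doubleThenIncrement = compose(increment, double)
(* inferred as int -> int *)
val five = doubleThenIncrement(2)         (* double(2) = 4, then increment(4) = 5 *)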
If you tried doing that in Java, you'd need to do something ridiculous like:
Java
interface Function<A, R> {
    R eval(A arg);
}
public static <A, B, C> Function<C, B> compose(final Function<A, B> function1, final Function<C, A> function2) {
    return new Function<C, B>() {
        public B eval(C arg) {
            return function1.eval(function2.eval(arg));
        }
    };
}
God help you if you want to write an anonymous function. But passing functions around is still necessary, so instead you see a lot of hacky workarounds: classes/interfaces like FooProvider, BarFactory, and IClickable whose sole purpose is to pretend they're functions, or design patterns like Template Method or Strategy. So you complicate the design, or simply choose not to push all that boilerplate around and live with a certain amount of code duplication. Either way, you lose.
* The corner case
As I mentioned before, there IS one corner case:
fun add(a, b) = a + b
Are a and b ints or reals (aka floats)? The compiler can't know, because the basic arithmetic functions are overloaded. My compiler defaults to choosing int. However, this only happens when the compiler has absolutely no information about the types. Even a single type annotation will completely disambiguate cases like these. For example:
fun add(a : real, b) = a + b
fun add(a, b : real) = a + b
fun add(a, b) : real = a + b
All completely disambiguate the function to be of type (real * real) -> real. If the function has any literals or constants, the whole problem goes away as well:
fun circumference(radius) = 2.0 * PI * radius
So the ambiguity is very easily avoided and only comes up in some very specific cases. Still, what if it does bite you? Well, your program won't compile when it infers the parameters are integers and you tried to pass floats, so you'll still catch the problem.
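To spell it out (a hypothetical example):
fun add(a, b) = a + b             (* no information at all, so it defaults to (int * int) -> int *)
val oops = add(1.5, 2.5)          (* type error: add expects ints, so the mistake surfaces at compile time *)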
Originally posted by Itazura
A (pure) functional programming language is something every programmer needs to experience, but I'm not sure it's the right kind of language for most jobs. I think the majority of people (including myself) find it easier to conceive of a step-by-step path to a solution--imperative programming--than to come up with a functional approach for most problems. It's just more natural for humans, I think.
I don't believe it's more natural at all. It's what we're accustomed to putting up with, which is not at all the same. It is remarkably difficult to reason about imperative code, because side effects are hidden all over the place, and that destroys your ability to pick the code apart and reason about the statements on an individual basis. It is relatively straightforward to reason about functional code and prove it's correct, something that no amount of testing can do. We had a choice between languages close to a Turing Machine and languages close to Lambda Calculus. We chose the Turing Machine for efficiency reasons because we needed all the help we could get with that in the 70s-90s. But both hardware and functional languages have come a long way since then and the Turing Machine way of thinking isn't very amenable to humans. We haven't jumped ship yet because of the business problems Icemage put forth - first you have to convince people that functional programming is a good idea even though no one's doing it, and then you still have to deal with the hiring problem.
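To sketch the difference I'm talking about (a minimal made-up example in Standard ML):
(* Pure: the result depends only on the argument, so any call can be understood in isolation. *)
fun total(xs) = foldl (op +) 0 xs
(* Impure: the same call also silently bumps a global counter, so you can't reorder, duplicate,
   or drop it without changing the behavior of code somewhere else. *)
val counter = ref 0
fun totalAndCount(xs) = (counter := !counter + 1; foldl (op +) 0 xs)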
I'm not advocating pure functional languages either, because side effects are sometimes necessary and insisting that you can never use them is counterproductive. What I'm advocating is that we stop using it as the default way of solving things, because it's a pretty dangerous and seldom necessary default.
Originally posted by Icemage
In general, I consider any fast "throw-away" tool like a bit of analytical code or a one-shot process to strongly favor functional programming techniques, while most processes that are built over many iterations over a long development standpoint or multiple programmers definitely favors OOP. There are of course exceptions to any rule, but these are "tendencies", and I think both approaches have value and should be evaluated case-by-case based on need.
I think you're confusing "Programming with function calls and primitive types" with "functional programming". They're not the same; functional programming offers you a superset of what OOP does as well as a saner default approach to problems.
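As a quick illustration of the superset point (a made-up sketch, not how you'd structure real code), you can recover object-style encapsulation with nothing but closures and a record:
(* A record of closures acting like an "object" with private, mutable state. *)
fun makeCounter() =
    let val n = ref 0
    in { increment = fn () => (n := !n + 1; !n),
         current = fn () => !n }
    end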
D compiles lightning fast, fast enough to use it as a "scripting language" for the sort of throwaway command line programs you'd write in, say, Python.
True, but who uses D, and why would/should they? See my comment below regarding interpreted vs. compiled development.
That still doesn't answer why. C#'s features are pretty much a superset of Java's, so how can Java's iteration times be shorter?
Because you can often prototype component functions in JavaScript to make sure they're conceptually sound before implementing them and incurring a compile/debug cycle cost. That's the big ticket item. No matter how fast the compiler is, it's never going to be as hassle-free as interpreted code. If you're just writing a building block function of some sort, it makes zero sense to compile your whole project just to test whether you've made some sort of basic error that isn't immediately visible to a compiler.
Type inference and dynamic typing are very different. Dynamic typing masks errors, type inference doesn't.
I'm aware of the difference. What I was referring to earlier is that, with a compiler that accurately infers data types, you still have the problem of the user taking the feature for granted and then needing to recognize when they've made an error.
While I'm not personally a proponent for typing drudgery, there are common use cases where having explicit type declarations enforced is useful. Specifically when other people are reading your code (or vice versa). You may know that arg1 is a real number, but if your coworker Joe has to come in during your vacation and tries to figure out what's wrong with your code, that's one more thing he has to research.
I'm not advocating pure functional languages either, because side effects are sometimes necessary and insisting that you can never use them is counterproductive. What I'm advocating is that we stop using it as the default way of solving things, because it's a pretty dangerous and seldom necessary default.
I think you're confusing "Programming with function calls and primitive types" with "functional programming". They're not the same; functional programming offers you a superset of what OOP does as well as a saner default approach to problems.
You're sort of preaching to the choir here. I dislike OOP for the same reasons you do; there's a whole generation of programmers out there who cut their teeth on it and insist on using it everywhere, even when inappropriate. I'm just not willing to throw it out in favor of functional programming paradigms, because functional programming has its own set of challenges. There are times when state-based values can be both practical and efficient, and functional programming treats the concept as anathema.