Recently I started a brand new React project: router v4 was near release, I was using Webpack 2 for the first time, and a few other bits were new (to me)... to be honest, it was about as frustrating an experience as I've had bootstrapping a new project. It's not as frustrating now, but it's really the first time I felt *that* frustrated, so I can feel the pain of a lot of others.
That said, I would never want to go back to the old days of manually bundling, using script runners to bundle/minify, or doing manual namespacing in JS to avoid collisions in a large project. Modern ES6 (or node/cjs) modules are much cleaner, and you can take my Babel from my cold, dead fingers before browsers support all the current stage-2+ features I use. It'll be 3-4 years before that really happens. The transition from build/deploy bundling to browser-native JS modules with HTTP2 server push will be weird, but I still prefer the way JS is written today over any time before.
I use a lot of async functions, and some ES6 classes (sparingly) where needed. There's a lot to like in writing modern JS. The flip side is evaluating modules on npm, and keeping up with the proliferation without falling into the trap of adopting things before they're ready, or before they're likely to take root.
Interesting... reminds me a lot of ASP.NET MVC attribute decorators for controllers... though it would be interesting to have some shortcuts, or other defaults (based on naming), but then you may as well replace express.
An aside: two signup forms taking over the screen *before* you see any content... bad form... I don't mind it quite as much when it detects your cursor heading out of the display, but before you've looked at anything, really?
The article itself is pretty naive and misleading, not to mention incomplete; it's most likely clickbait. There's no depth to any of the examples, and it mostly leaves anyone who would be interested wanting, without a means to grow.
I really wish there was more effort to get the underlying data for Moment into browsers, so the likes of moment itself could be *much* smaller...
because `require('stream').Transform` pretty much does everything through2 does...
const { Transform } = require('stream');

new Transform({
  objectMode: true,
  transform(chunk, enc, cb) { /* per-chunk work */ },
  flush(cb) { /* end-of-stream work */ }
});
Is there something through2 inherently does that this doesn't?
The built-in classes cover it... I mean, you can either inherit from the base streams, or use streams that actually serve a purpose. Readable handles backpressure by default, so I'm not sure why you'd want from2 or through2... end-of-stream comes down to knowing when to listen for 'end' (readable/transform) and 'finish' (writable).
pump actually seems to serve a purpose... as do the likes of split2, and others... but the most basic readable/transform/writable bases are covered out of the box, and it's really better to use them than bring in a bunch of potentially extra dependencies.
Also, don't roll your own CSV library; there are a couple of decent ones already... ymmv, but there are a lot of edge cases to CSV parsing, and you will probably come across some issues if you aren't very careful.
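To illustrate one of those edge cases (sample data made up here): a naive comma split breaks as soon as a field contains a quoted comma:

```javascript
// '"Smith, John",42' is a valid two-field CSV row, but a plain split
// treats the comma inside the quotes as a delimiter.
const row = '"Smith, John",42';
const naive = row.split(',');
console.log(naive); // [ '"Smith', ' John"', '42' ] (three fields instead of two)
```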
Grr... please don't use the modules the article mentions... use the built-in, extensible streams...
import { Readable, Writable, Transform, Duplex } from 'stream';
All you have to do is implement the minimal overrides in your own version... for example...
import { Transform } from 'stream';

// relies on split2 being run before this filter
export default class FileOutputSettingsFilter extends Transform {
  __line = 0;

  constructor(options) {
    super({ objectMode: true });
  }

  _transform = (chunk, enc, cb) => {
    const line = ++this.__line;
    if (line > 3) return cb(null, chunk);
    if (line === 2) this.emit('fileOutputSettings', chunk);
    cb();
  }
}
This lets me pluck the second line of input from a file, while passing everything after the third line through... in the case above, there is some prefixed data before CSV data at the top of the file.
It's easy enough to create readable/writable streams as well... I have a few that output to message queue services, logging, etc.