01-28-2013 04:35 PM
Dear Salesforce Development Team: Please fix the Deployment / Unit Testing process.
We do NOT need to recompile every line of Unit Testing code for a single class update.
This results in 98% wasted compiler time and a 15-minute wait for every minor change - one that could be completed in 30 seconds.
Contact me if you need ideas - I have many.
Why do we need to perform unit testing for classes that were compiled yesterday, and the day before that, and the day before that, and every day for the last 2 months - for a class that doesn't reference them?
This is the biggest waste of time by **far**. There are client-side and server-side solutions to address this.
This is a nightmare.
01-29-2013 04:40 AM
You can always click on the class name and then click Run Test. This will run the tests for that particular class only.
What I would like to see is the ability to run a test on a specific method in a class. That would be cool.
Personally, I don't see this as a catastrophe but I think some improvements could be had in some places.
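For illustration, here is a minimal sketch of the granularity being discussed (class and method names are invented): clicking Run Test on this class runs every method in it, even when only one scenario is relevant to your change.

```apex
// Hypothetical sketch - OrderService and its methods are invented names.
// "Run Test" operates on the whole class, so both methods below execute
// together; there is no supported way to run just one of them.
@isTest
private class OrderServiceTest {
    static testMethod void testDiscountApplied() {
        // the scenario you actually changed
        System.assertEquals(90, OrderService.applyDiscount(100, 10));
    }
    static testMethod void testDiscountRejectedWhenNegative() {
        // an unrelated scenario that runs anyway
        System.assertEquals(100, OrderService.applyDiscount(100, -5));
    }
}
```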
01-29-2013 08:20 AM
It doesn't matter what you call it. It's "Running through" things that don't need to be run through.
Here's the scenario:
1) People do testing in sandbox
2) It works
3) People agree
4) People sign in blood
5) Enemies are made due to code that is slashed in this "one-time" update
6) Code is escalated to production
7) Code is tested in production (minimally because no one has time to do full testing)
8) Changes are demanded after 1-2 days
9) After one or two iterations of Step 8, Steps 1-5 are eliminated
10) A new Step 5a (Minor Code Change) is inserted
11) Steps 5a - 8 are repeated indefinitely.
1) Step 1 above is great for testing, but it doesn't solve much for reasons discussed elsewhere
2) Step 8 will recur ***REGULARLY*** because end-users don't have the time or skillset to think it through and Management has a "make-it-happen" mentality
3) This leaves us with "completed, accepted, tested and unit-tested code" -> *with* *unending* *minor* *updates*.
Yes - We understand the *idea* - and that's what it is - an *idea*. Now we've found that the idea is flawed. And I think you will agree that an optimized process is better, faster, and more streamlined than an inefficient one -> Hence the Feature Enhancement Request.
I'm not arguing that the ideal is incorrect - only that it is not implemented correctly. The "RUN THROUGH" is checking classes, objects, styles, whatever - that are **unrelated** to the code.
Think about Linux. It forces a Disk-Check **periodically** (and when requested) - not *every* time you add a user.
01-29-2013 06:35 PM
The fact is, though, that once step 11 is reached, there is a problem with usability and stability, and adoption of the CRM is likely to fall. Often, a seemingly innocuous update will cause damage severe enough that, without a testing phase, it would cause problems later.
As an actual, live example, one of our developers added a validation rule to our project. One. Single. Validation. Rule. Now, you'd like to think that everything would have been all fine and dandy. After all, no code at all was changed. Unfortunately, it actually caused 78 test failures during the pre-upload Run All Tests. Had this validation rule gone into effect in a customer's database without any warning, they would have immediately lost about 25% of the total functionality of our package, and that 25% covered about 80% of what makes our package useful.
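A hypothetical sketch of how that plays out (the object, field, and rule here are invented for illustration): a test that passed for months starts failing the moment a validation rule is activated, with no code change at all.

```apex
// Hypothetical example - this test passed until an admin added a validation
// rule requiring Industry on Account. No Apex changed, yet the insert now
// throws a DmlException and the test fails during Run All Tests.
@isTest
private class AccountSetupTest {
    static testMethod void testCreateAccount() {
        Account a = new Account(Name = 'Acme'); // Industry left blank
        insert a; // fails once the new rule is active
        System.assertNotEquals(null, a.Id);
    }
}
```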
So, while I agree that it would be nice to skip the tests altogether, especially for seemingly small updates, salesforce.com has decided to mitigate the risk of lost productivity from a botched update rather than make it easier for developers to deploy lousy code.
Personally, I applaud their brazen attempt to make developers produce better code. Not many companies would choose to go that route, and I believe, in the end, that developers that regularly develop on Force.com will have better programming practices in general, even if they move on to another field of programming (such as PHP or C++). Someone should make a study on that subject.
And, your analogy on Linux is comparing apples to oranges. A Linux user is like a salesforce.com record, and a Linux package is like Apex Code. So, when you modify records in salesforce.com, it doesn't Run All Tests, just like Linux doesn't do any strenuous testing when a user is added. But, when you install a package in Linux, it runs a test, a version dependency, runs pre-install triggers, modifies the system, runs post-install triggers, cleans up other packages, etc. Linux and salesforce.com are very much alike in design in that regard. Even a Microsoft Update creates a restore point so you can roll back a failed update.
So, while it seems like overkill, I feel like they've almost got it right... you are correct in saying that the system should isolate tests to just suspect items, but that truth is actually reasonably obscure when you consider how everything ties together. Even the "security scanner" takes 6 hours to fully map a decent size project for flaws. Imagine having to go through that period for each "major" upgrade at the savings of having shorter "minor" updates. I don't think the trade-off would be fair.
01-30-2013 01:15 AM
I have to agree with sfdcfox; the whole process helps a lot, and for me it's shaped how I work as a developer. I will quite happily go to another language and run through the same processes every time I make a deployment/release, as it makes life so much easier.
The thing to note is, Salesforce is catering for all types of deployments, whether it's minor changes or major changes, and whether a minor change is just a typo fix or a method refactor. At the end of the day, a typo fix could change the way an entire application works. This needs to be accounted for.
01-30-2013 07:41 AM
Yes, I was aware of the limit of the Linux analogy as I wrote it. I used a simplistic example because it *does* fit -> I'm talking about wasting time repeatedly.
You're still missing the point. I'm not talking about not running tests. I'm talking about not running tests that *don't relate* to the modified code.
I know the effect of the change I made. Perhaps it was a string change from 'XYZ' to 'ABC'. Perhaps I added a variable and produced a formula -> it's not going to affect anything in the Accounts, Contacts, Opportunities, [Insert 60 objects here] because it is a custom, isolated object. Perhaps, in a different scenario, I *do* customize something related to Accounts and Contacts -> it won't be related to 48 other tables. On the low-probability-but-eventual-scenario that I make a typo, I expect all the tests **related** to my code to be run to catch that typo. But you don't have to run tests for ***UNRELATED OBJECTS AND CLASSES***.
It sounds like you have the luxury to work in a static environment where a Project is scoped, designed, coded, tested and promoted. That's great for people with a full sandbox and no daily Change-Orders - but it doesn't work so well for the absolute minimum Config Sandbox where most of the world works and anyone in the company can request changes.
Here's the Solution
Instead of spending all that time running tests that don't relate to the single class being promoted, perhaps SFDC Development could focus on *identifying* the change-sets, running tests only on the changed objects/classes/triggers, and then -> reclaiming the server capacity that's no longer needed thanks to the 98% reduction in wasted processing time. That extra storage could then be offered back to customers to grow their Config Sandboxes - because the current allocation is not even close to reasonable.
01-30-2013 11:39 AM
But, from my perspective, we consider the time well spent - our product is more polished because of this feature, but development takes longer. The last eleven months here have been mostly shoot-first-ask-questions-later instead of formal project documentation, though.
01-30-2013 11:54 AM
All of which basically state "make testing better" - whether by not running tests for non-code deployments, by targeting specific failed tests, or by other means.