Human resources (HR) analytics helps you gain insight into every aspect of your organization’s resourcing, from hiring and onboarding through to leave and termination, by combining data from many sources. It helps you understand, and make predictions about, who and how many people you need to hire, how they need to […]
Proof of Concept & Discovery Phase for Data Analytics Platform — Technology News — John Nelson’s Blog
The client is a large, healthcare-focused strategic media planning and buying group of several agencies. The group offers media planning for different channels by analyzing existing data available within each agency. They wanted to develop an analytics platform to improve executive decision-making across the agency. To help our client be strategic about creating such a […]
I first encountered this error one Saturday evening while working from home. I have several devices sharing a not-so-good wireless router, and I have run into the error a few times during periods of high internet activity in my house. I mention this because one of the common causes of this error is an abrupt closure of a TFS connection by the client machine, which can be due to an unreliable internet connection.
Common Root Causes
The common root causes of a TF400324 error are:
- An abrupt termination of TFS services from a client machine
- A corrupt TFS client installation
- Multiple versions of TFS (2010, 2012, 2013) set up on the same client machine
- Multiple team project collections across possibly multiple versions of TFS (that can be fun!)
In my case, the cause was the first one.
The quick resolution to this problem is to clear your local TFS cache. Don’t worry, this will not cause issues with any pending check-ins that you may have.
Follow these steps:
- Close ALL instances of Visual Studio and ensure that there are no devenv.exe processes running.
- Open My Computer and browse to your c:\Users\<UserName>\AppData\Local\Microsoft\Team Foundation\ folder. Depending on the version of TFS you are running, open the folder that corresponds to that version and delete all of its files and sub-folders.
- Open Visual Studio again and select Connect to Team Foundation Server from the Team menu.
- If the connection fails, find the Sign In link at the bottom left corner of the dialog box and click it. If you are prompted for your credentials, enter them and continue. See the screenshot below.
Once you re-establish your connection, you will see that all pending check-ins are still intact and you are good to go.
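If you find yourself clearing the cache often, the cleanup can be scripted. Here is a minimal sketch in Python, purely for illustration; the mapping of TFS release to cache sub-folder is my assumption and worth verifying against the folders actually present on your machine:

```python
import shutil
from pathlib import Path

# Assumed mapping of TFS release to its cache sub-folder under
# %LOCALAPPDATA%\Microsoft\Team Foundation\ -- verify these names
# against the folders that actually exist on your machine.
CACHE_FOLDER_BY_VERSION = {"2010": "3.0", "2012": "4.0", "2013": "5.0"}

def tfs_cache_dir(local_appdata: str, tfs_version: str) -> Path:
    """Build the path to the local TFS cache for the given version."""
    folder = CACHE_FOLDER_BY_VERSION[tfs_version]
    return Path(local_appdata) / "Microsoft" / "Team Foundation" / folder / "Cache"

def clear_tfs_cache(local_appdata: str, tfs_version: str, dry_run: bool = True) -> Path:
    """Delete the cache folder; close all devenv.exe instances first.

    With dry_run=True (the default) nothing is deleted; the function
    just returns the path it would remove.
    """
    cache = tfs_cache_dir(local_appdata, tfs_version)
    if not dry_run and cache.exists():
        shutil.rmtree(cache)
    return cache
```

On a typical machine you would call `clear_tfs_cache(os.environ["LOCALAPPDATA"], "2012", dry_run=False)` after closing Visual Studio.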
Earlier this week, I used the ASP.NET AJAX RadScheduler for the first time. Once I had set the necessary properties and tried to debug the page on which the RadScheduler (version 2014.2.618.45) control resided, I received the following error, even though I was not yet performing any data binding:
Page_Validators is undefined
After some research, I discovered that client-side validators work differently in ASP.NET 4.5. Microsoft’s unobtrusive validation relies on jQuery, and you can run into difficulties when using a RadScriptManager: ASP.NET expects jQuery to be available in the global jQuery variable, but Telerik exposes it via the $telerik field.
Short of registering my own jQuery on the page to get around this, I decided to simply disable the UnobtrusiveValidationMode by adding the following key to my <appSettings> section:
<add key="ValidationSettings:UnobtrusiveValidationMode" value="None"/>
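For context, this is where the key sits in a minimal Web.config (a sketch; your real file will contain many other sections):

```xml
<configuration>
  <appSettings>
    <!-- Revert to ASP.NET 4.0-style validation script emission -->
    <add key="ValidationSettings:UnobtrusiveValidationMode" value="None" />
  </appSettings>
</configuration>
```

Note that this switch applies application-wide, not just to the page hosting the RadScheduler.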
In this very short post, we are going to briefly discuss a couple of common errors encountered in TFS 2012 when using local workspaces, running multiple instances of Visual Studio 2012, and having a large codebase.
If you are using TFS 2012 (including TFS 2012 Express) you may encounter one of the following errors:
- TF400017 – The workspace properties table for the local workspace [name] could not be opened.
- TF400030 – The local data store is currently in use by another operation. Please wait and then try your operation again. If this error persists, restart the application.
Pay particular attention to the word local. These errors generally occur when you are using a local workspace.
I was able to resolve this problem on my development machine by simply making sure that my workspace location was set to Server. Though Server workspaces are the standard configuration we use internally, for some reason I had created my new workspace as a local workspace and forgotten to change its location. 😦
To ensure your workspace is a Server workspace, open your Team Explorer, select Source Control Explorer, then select your workspace and make sure that the location is set as shown below:
Once you set the workspace location to Server, the poor performance and unpredictability should disappear! Happy coding!
In the days of waterfall, software testers created test scripts from mounds of requirements and rigidly, methodically executed those scripts against the functionality the development team delivered. Bug remediation was often a sizable endeavor that ramped up as we approached milestones and immediately before we had to deliver software. In many cases, development stopped, and developers sat around waiting for testing results for hours or sometimes days. Testers and developers often did not work together as a cohesive group, and on larger projects with dozens of team members, communication between the two was routinely late in the process and not very effective.
With Agile, testers and developers not only work closely together during development, they communicate openly and can foster a great deal of synergy. The “us versus them” mindset that often crept into waterfall development is gone, and the two roles work together to develop and deliver the best products possible. We no longer “throw the software over the wall” for the QA group to pick apart; instead, we each bring our individual roles, skill sets, understanding of the problem domain, and unique insights into not just the product we are developing but the problems and processes that our customer lives with every day.
Continuous Testing and Perpetual Quality
Testing in an Agile environment is continuous. Developers test rigorously as they develop, and often turn to the testers and business analysts to provide functional expertise. In other words, though testing is done continuously, the routine interaction between developers and testers often puts the testers in an advisory role as well. Since the development team often publishes or deploys new functionality into the development/test environment(s) on nearly a daily basis, verification and feedback from the testers is immediate.
Testing and Quality Assurance under Agile is not a phase at the end of a development effort. Rather, it is seamlessly woven into the process and is an integral part of the total endeavor. There is no separation between developing and testing – testing is just an ongoing activity that ensures the validity and correctness of the functionality that is being developed.
Agile Testers are Involved!
In an Agile environment, testers are actively involved in discussions with the customer and the creation of user stories. Along with the business analysts, they develop an intimate knowledge of the true needs of the customer and are directly involved in the software design strategy from the standpoint of testing. Beyond this, they actively participate in the segregation and decomposition of user stories and the estimation of the overall development and testing efforts for each.
In contrast to the days of waterfall, when testers were often viewed as bottlenecks and impediments, with Agile they actively propel the team and the effort forward with their insights and their understanding of the customer’s needs, and they do all of this with a keen eye not just on the act of testing but on the testability of the functionality being created.
Agile Testers Become Subject Matter Experts (SMEs)
As we stated in the previous section, with Agile, testers are actively involved in discussions with the customers. This includes senior management, functional management, and the people who are in the middle of the day-to-day operations of the business. The testing practice within an Agile environment places the testers in an advisory role in a way that waterfall never could. This means that part of the normal, continuous interaction between developers and testers includes testers providing subject matter expertise in many situations. Therefore, testers should take it upon themselves to become SMEs. In fact, the testers and the business analysts should collectively possess nearly the entire body of knowledge of the functional personnel from whom they gleaned the user stories that drive development. That sounds like a tall order, but it is necessary to ensure the effective delivery of acceptable products. It allows the Agile development team to be as self-supporting as possible while providing a basis from which software that truly meets the needs of the customer can be developed.
What Makes a Good User Story?
A good user story is complete, accurate, detailed, and, from the standpoint of QA, testable! This may sound obvious; after all, don’t we already know that? We should, but it is important to state it explicitly: a good user story is testable.
Must Haves – Continuous Integration and Automated Testing
Continuous Integration is a concept that calls for every developer to continuously integrate his or her changes into the master code base. This necessitates a suitable source control repository such as Microsoft Team Foundation Server (TFS), though there are many others that serve the purpose very well. Regardless of the chosen source control tool, the process dictates that every change be compiled and built against the latest code base before it is checked in. Every developer must adhere to this, and it’s pretty simple.
Beyond that, automated builds should be implemented. The build server(s) should be configured to get the latest version of the code base and compile/build the code periodically throughout the day or at least based on a pre-defined schedule. Once the latest code has been retrieved, the build server(s) should then execute any prescribed automated tests. If the code coverage is high enough, problems that would go unnoticed for days or weeks can be uncovered very quickly to facilitate rapid remediation. The topic of code coverage is pretty important because the purpose of automated tests is to holistically ensure the integrity of the code base at all levels. This cannot be done unless the code coverage is as close to 100% as realistically possible.
As the software we build becomes more complex, with multiple layers that are ideally loosely coupled, the need for automated unit testing becomes paramount. With unit testing, individual tests have an extremely narrow focus. As we move up the layer stack, we don’t deviate from this, but we understand that when we reach the business logic layer (BLL), we may have dozens of business services with potentially hundreds or thousands of methods, each of which relies upon some lower-level object or objects. If we build our unit tests from the bottom up, consider all the possible scenarios and outcomes, and orchestrate our automated tests to exercise all layers of our code from the bottom up, our code coverage approaches 100%. The daily or periodic execution of these batteries of tests ensures that the integrity of our application’s layers remains intact as we move through development. Remember, Continuous Integration requires us to always know that our code base is sound.
Automated testing tools and cohesive automated testing methodologies are critical to the delivery of quality software under Agile. Developers must have a sense of ownership in every piece of functionality they develop. Unit testing is vital to their development process and code coverage should be very high. I don’t want to go too deeply into the specifics of automated testing in this post, but it is important to recognize its importance with regard to Agile.
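To make “narrow focus” concrete, here is a minimal sketch of what such unit tests look like. The example is in Python for brevity (in a .NET shop the same shape applies with MSTest or NUnit), and the DiscountCalculator class and its business rule are entirely hypothetical:

```python
class DiscountCalculator:
    """Hypothetical business-layer object: computes an order discount."""

    THRESHOLD = 100.00   # orders at or above this subtotal earn a discount
    RATE = 0.10          # 10% off qualifying orders

    def discount(self, subtotal: float) -> float:
        if subtotal < 0:
            raise ValueError("subtotal cannot be negative")
        if subtotal >= self.THRESHOLD:
            return round(subtotal * self.RATE, 2)
        return 0.0


# Each test exercises exactly one scenario -- narrow focus.
def test_no_discount_below_threshold():
    assert DiscountCalculator().discount(99.99) == 0.0

def test_discount_at_threshold():
    assert DiscountCalculator().discount(100.00) == 10.00

def test_negative_subtotal_is_rejected():
    try:
        DiscountCalculator().discount(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass  # the rejection we expect
```

Run under a test runner such as pytest; a CI build that executes batteries of tests like these on every check-in is exactly the safety net described above.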
In summary, within an Agile environment, testing is not a phase at the end of the development initiative. Instead, it is an integral part of the overall development effort. It is executed daily by developers who not only write unit tests for their code, but who diligently test all functionality they build. Testing and QA in general is sewn into the fabric of the overall initiative and testers work directly with the developers on a daily basis, not just specifically as QA providers, but as SMEs and advisers. Testers in the Agile environment work directly with the customer and serve a very important role in combination with the business analysts. With Agile, our overall view of testing changes dramatically from the days of waterfall and we live by the concept of perpetual quality.